Who we are looking for
This is a hands-on Data Engineer position. We are looking for a candidate with strong knowledge of big data technologies and solid development experience with Databricks.
What you will be responsible for
As a Data Engineer, you will:
- Design and develop custom, configurable, high-throughput frameworks and libraries
- Architect and implement scalable data ingestion pipelines using Azure Data Factory and Databricks
- Transform raw datasets into structured data lakes (Medallion architecture: bronze, silver, gold)
- Migrate the Hive Metastore to Unity Catalog and configure the associated permissions
- Design Delta tables with appropriate schemas and secure them with access controls
- Enable business intelligence through Power BI integration and dashboard-ready datasets
What we value
These skills will help you succeed in this role:
- Experience performing data analysis and data exploration
- Experience working in an agile delivery environment
- Strong knowledge of Databricks SQL and Scala for data engineering pipelines
- Strong experience with Unix, Python, and complex SQL
- Strong critical thinking, communication, and problem-solving skills
- Strong hands-on experience in troubleshooting DevOps pipelines and Azure services
- Experience with the Azure cloud and Apache Spark (PySpark)
Education & Preferred Qualifications
- Bachelor's degree in a computer science or IT-related subject
- 5+ years of hands-on Databricks experience
- Databricks Certified Data Engineer Professional certification