
A leading financial institution in Johannesburg is seeking a Data Engineer to design and implement scalable data pipelines using Microsoft Fabric. The role requires a solid background in data engineering with at least 3 years of experience, proficiency in SQL and Python, and a degree in Computer Science or IT. The position is full-time and does not support remote work.
- Design and implement scalable and efficient data pipelines using Microsoft Fabric components such as OneLake, Dataflows Gen2, and Lakehouse.
- Develop ETL/ELT processes using Azure Data Factory, PySpark, Spark SQL, and Python.
- Ensure data quality, integrity, and security across all platforms.
- Collaborate with stakeholders to gather requirements and deliver technical solutions.
- Optimize data workflows and troubleshoot performance issues.
- Support hybrid cloud deployments and integrate on-premises and cloud environments.
- Maintain documentation and follow data engineering best practices, including version control and modular code design.
Remote Work: No
Employment Type: Full-time
Experience: 3+ years
Vacancy: 1