Engineer, Data

Standard Bank Group

Johannesburg

On-site

ZAR 500,000 - 750,000

Full time

4 days ago

Job summary

A leading financial services group in Johannesburg is looking for a Data Engineer to design and implement scalable data pipelines using Microsoft Fabric and Azure technologies. The ideal candidate will have at least 3 years of experience and proficiency in SQL and Python. This role offers the opportunity to work with talented professionals in a dynamic environment, focusing on data quality and integration across platforms.

Qualifications

  • Minimum of 3 years’ experience in a data engineering role.
  • Strong hands-on experience in Microsoft Fabric and Azure technologies.
  • Experience with CI/CD pipelines, DevOps, or machine learning workflows is a plus.

Responsibilities

  • Design and implement scalable and efficient data pipelines.
  • Develop ETL/ELT processes using Azure Data Factory and Python.
  • Ensure data quality, integrity, and security across all platforms.

Skills

SQL
Python
Power BI
Data engineering

Education

BSc in Computer Science or Information Technology
Microsoft certification in Azure Data Engineering or Microsoft Fabric

Tools

Microsoft Fabric
Azure Data Factory
Azure Synapse
Azure SQL
Databricks

Job description

Company Description

Standard Bank Group is a leading Africa-focused financial services group and an innovative player on the global stage, offering a variety of career-enhancing opportunities plus the chance to work alongside some of the sector’s most talented, motivated professionals. Our clients range from individuals to businesses of all sizes, high-net-worth families, and large multinational corporates and institutions. We’re passionate about creating growth in Africa, bringing true, meaningful value to our clients and the communities we serve, and creating a real sense of purpose for you.

Job Description

Design and implement scalable and efficient data pipelines using Microsoft Fabric components such as OneLake, Dataflows Gen2, and Lakehouse. Develop ETL/ELT processes using Azure Data Factory, PySpark, Spark SQL, and Python. Ensure data quality, integrity, and security across all platforms. Collaborate with stakeholders to gather requirements and deliver technical solutions. Optimize data workflows and troubleshoot performance issues. Support hybrid cloud deployments and integrate on-premises and cloud environments, while maintaining documentation and following best practices in data engineering, including version control and modular code design.
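
As a rough illustration of the ETL/ELT work described above, the sketch below shows a minimal PySpark batch extract-transform-load step. The file path, column names, and table name are hypothetical examples rather than details from this posting, and in practice a step like this would be parameterised and orchestrated through Azure Data Factory or a Fabric pipeline rather than run as a standalone script.

# Minimal PySpark ETL sketch -- illustrative only; the path, columns, and
# table name below are hypothetical and not taken from this posting.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("transactions_etl").getOrCreate()

# Extract: read raw CSV files from a landing folder (e.g. a lakehouse Files area).
raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("Files/raw/transactions/")
)

# Transform: basic data-quality steps -- drop duplicates and null keys,
# and standardise the timestamp column.
clean = (
    raw.dropDuplicates(["transaction_id"])
    .filter(F.col("transaction_id").isNotNull())
    .withColumn("transaction_ts", F.to_timestamp("transaction_ts"))
)

# Load: write to a managed Delta table that downstream reports can query
# (assumes the Spark session is configured with Delta Lake support).
clean.write.mode("overwrite").format("delta").saveAsTable("transactions_clean")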

Qualifications

  • BSc in Computer Science or Information Technology, as well as a Microsoft certification in Azure Data Engineering or Microsoft Fabric
  • Minimum of 3 years’ experience in a data engineering role, with strong hands-on experience in Microsoft Fabric, Azure Synapse, Azure SQL, and Databricks
  • Proficiency in SQL, Python, and Power BI
  • Solid understanding of data modelling, data governance, and data warehousing
  • Experience with CI/CD pipelines, DevOps, or machine learning workflows is a plus.

Additional Information

Behavioural Competencies:

  • Adopting Practical Approaches
  • Checking Things
  • Developing Expertise
  • Embracing Change
  • Examining Information

Technical Competencies:

  • Big Data Frameworks and Tools
  • Data Engineering
  • Data Integrity
  • IT Knowledge
  • Stakeholder Management (IT)
