Senior Data Engineer

Kuda

Cape Town

On-site

ZAR 800 000 - 1 200 000

Full time

Today

Job summary

A leading financial technology company is seeking a Senior Data Engineer in Western Cape, Cape Town. The ideal candidate should have over 7 years of experience and strong expertise in SQL and Python, with responsibilities including designing data pipelines and mentoring junior engineers. This role offers the chance to impact banking innovation and data-driven strategies.

Qualifications

  • 7+ years of experience in data engineering, including leading teams.
  • Experience with CI/CD for data and containerization.
  • Familiarity with machine learning pipelines is a plus.

Responsibilities

  • Design, develop, and optimise large-scale data pipelines.
  • Lead the integration of multi-cloud data platforms.
  • Establish data quality frameworks and governance.
  • Mentor junior and mid-level data engineers.
  • Drive automation in data workflows.

Skills

Expert-level SQL skills
Experience with streaming architectures
Strong programming skills in Python
Knowledge of data ingestion tools
Strong understanding of data architectures
Familiarity with infrastructure-as-code tools
Proficient in Agile methodologies

Tools

Microsoft SQL Server
Google BigQuery
dbt Cloud or Dataform
Apache Kafka
Docker
Kubernetes

Job description

Overview

Job title: Senior Data Engineer

Job location: Western Cape, Cape Town

Deadline: November 17, 2025

Role Overview

We are expanding our reach and seeking a visionary Senior Data Engineer to spearhead our data engineering efforts, driving innovation and growth. With a passion for data-driven decision-making, you will play a pivotal role in shaping the future of banking for millions.

Responsibilities
  • Design, develop, and optimise large-scale data ingestion, transformation, and processing pipelines for structured, semi-structured, and unstructured data.
  • Lead the integration of multi-cloud and hybrid data platforms (e.g., Azure SQL, Google BigQuery, on-premises SQL Server).
  • Define and enforce data architecture standards to ensure scalability, security, and optimal performance.
  • Leverage Dataform to manage SQL-based transformations, version control, testing, and deployment of analytics datasets in BigQuery.
  • Introduce and manage real-time streaming solutions (e.g., Kafka, Pub/Sub, or Dataflow) in conjunction with batch data pipelines.
Data Quality & Governance
  • Establish data quality frameworks with automated validation, anomaly detection, and reconciliation checks.
  • Collaborate with Data Governance teams to maintain data catalogues, metadata management, and lineage tracking.
  • Implement security, privacy, and compliance standards (such as GDPR, NDPR, and ISO 27001) within data pipelines.
  • Mentor junior and mid-level data engineers, providing technical guidance and career development support.
  • Partner with Data Science and BI teams to deliver data products for predictive modelling, experimentation, and self-service analytics.
  • Act as a subject matter expert in cross-functional projects, advising on technical trade-offs and best practices.
  • Research and adopt emerging data engineering technologies and methodologies.
  • Drive automation in data workflows to reduce manual intervention and operational risk.
  • Optimise data storage and compute costs through partitioning, clustering, and workload management.
Requirements
  • 7+ years of experience in data engineering, with a proven track record of leading teams.
  • Expert-level SQL skills, including advanced query optimisation and performance tuning.
  • Proven experience with:
  • Microsoft SQL Server / Azure SQL DB / Azure Managed Instance
  • Google BigQuery & Google Cloud Platform
  • dbt Cloud or Dataform for data modelling, testing, and deployment
  • Data ingestion tools (e.g., Airbyte, Azure Data Factory, or Fivetran)
  • Strong programming skills in Python (preferred) and at least one additional language (Java, Scala, or Go).
  • Experience with streaming architectures (Kafka, Pub/Sub, Spark Streaming, or Flink).
  • Familiarity with infrastructure-as-code tools (Terraform, Pulumi, or Deployment Manager).
  • Strong understanding of modern data architectures (Medallion, Data Mesh, Lakehouse).
  • Hands-on experience with CI/CD for data and containerization (Docker, Kubernetes).
  • Proficient in Agile delivery methodologies.
Preferred
  • Knowledge of machine learning pipelines (Vertex AI, MLflow, or SageMaker).
  • Prior experience working in FinTech or other regulated industries.
  • Exposure to Looker or similar BI tools.