Data Engineer

Zensar Technologies

Johannesburg

On-site

ZAR 500 000 - 700 000

Full time

Job summary

A technology solutions company is seeking a skilled Data Engineer to shape the backbone of their data ecosystem in Johannesburg. You will design and maintain data pipelines and collaborate with analysts and data scientists. Ideal candidates will have 3–6 years of experience in data engineering, strong SQL and Python skills, and cloud experience. This is a contract role of 6 to 12 months with potential for renewal.

Qualifications

  • 3–6 years in data engineering.
  • Strong SQL & Python skills.
  • Cloud experience (Azure, AWS, or GCP).

Responsibilities

  • Build and optimize ETL/ELT pipelines.
  • Design data models, lakes, and warehouses.
  • Collaborate with analysts & data scientists.

Skills

Data engineering
SQL
Python
ETL tools
Cloud experience

Tools

Azure
AWS
GCP
ADF
Airflow
Snowflake
Redshift
BigQuery

Job description

We’re building smarter data pipelines. Want to help? 😎

Location: Johannesburg

Role Type: Contract, 6 to 12 months (with potential for renewal)

We’re looking for a skilled Data Engineer to join our Zensar team and play a key role in shaping the backbone of our data ecosystem. You’ll design, build, and maintain robust data pipelines and scalable infrastructure that power advanced analytics and business intelligence across the organization. If you love turning raw data into powerful insights, this is the role for you!

What you’ll do:

  • Build and optimize ETL/ELT pipelines
  • Design data models, lakes, and warehouses
  • Collaborate with analysts & data scientists
  • Ensure data quality, integrity & security
  • Troubleshoot and improve data workflows

What we’re looking for:

  • 3–6 years in data engineering
  • Strong SQL & Python skills
  • Cloud experience (Azure, AWS, or GCP)
  • Familiarity with ETL tools (ADF, Airflow)
  • Knowledge of warehousing (Snowflake, Redshift, BigQuery)

Nice to have: Streaming (Kafka, Spark), DevOps/CI/CD, ML pipeline exposure

Join us and work with cutting‑edge tools, talented teams, and data that drives real business impact! ✨
