Senior Data Engineer

Edge Executive Search Group

Johannesburg

On-site

ZAR 200 000 - 300 000

Full-time



Job summary

A leading logistics company in Johannesburg seeks a Senior Data Engineer to design and maintain scalable ELT pipelines that support data integration and data quality. The role requires advanced SQL and hands-on experience with ELT/ETL tools such as Azure Data Factory and Apache Airflow. The ideal candidate has 3–5 years' data engineering experience in the logistics sector. Join a dynamic team focused on building reliable data infrastructure that drives operational efficiency and analytics.

Qualifications

  • 3–5 years’ experience in data engineering within logistics or supply chain.
  • Strong experience with ELT/ETL tools.
  • Solid understanding of data modelling and warehousing.

Responsibilities

  • Design, build, and maintain robust ELT pipelines.
  • Integrate data across ERP, WMS, TMS, and IoT platforms.
  • Collaborate with BI and data science teams.

Skills

Data engineering
Advanced SQL
Python
Problem-solving
Attention to detail
Data quality assurance

Education

Degree in Computer Science or a related field

Tools

Azure Data Factory
SSIS
Apache Airflow
Cloud data platforms (Azure, AWS, or GCP)

Job description

Senior Data Engineer | South Africa | Permanent

Build the data backbone that powers smarter logistics decisions.
Own complex pipelines that turn operational data into real-time insight across the supply chain.

This role sits at the core of a Business Intelligence environment, responsible for designing, building, and maintaining a scalable data infrastructure that supports reporting, advanced analytics, and automation. You will work with high-volume operational and supply chain data, ensuring secure, reliable, and timely access for analytics, optimisation, and decision-making across the logistics value chain.

Working closely with BI, analytics, and operational stakeholders, you will integrate multiple data sources, improve data quality, and continuously optimise performance. The environment is fast-paced and operationally critical, requiring strong problem-solving skills, attention to detail, and the ability to work under pressure.

Our client operates in a complex, regulated logistics environment, supporting critical supply chain operations across South Africa. The organisation is data-driven, performance-focused, and committed to using analytics and automation to improve efficiency, visibility, and compliance across its network.

What You’ll Do
  • Design, build, and maintain robust ELT pipelines with high availability and low latency

  • Integrate data across ERP, WMS, TMS, and IoT platforms

  • Manage and optimise data lake and data warehouse environments

  • Develop, optimise, and maintain advanced SQL transformations and pipelines

  • Ensure high standards of data quality, validation, and governance

  • Collaborate with BI and data science teams to deliver trusted datasets

  • Automate recurring data processes to reduce manual effort

  • Investigate data issues, perform root cause analysis, and implement permanent fixes

  • Maintain clear documentation and support data cataloguing initiatives

  • Support data security, access controls, and compliance requirements

What You Bring
  • Degree in Computer Science or a related field

  • 3–5 years’ experience in data engineering, ideally within logistics or supply chain environments

  • Strong experience with ELT/ETL tools such as Azure Data Factory, SSIS, or Apache Airflow

  • Advanced SQL expertise and proficiency in Python or a similar scripting language

  • Solid understanding of data modelling and data warehousing concepts

  • Experience working with cloud data platforms (Azure, AWS, or GCP)

  • Familiarity with version control and CI/CD practices for data products

  • Strong analytical thinking, problem-solving ability, and attention to detail

What Success Looks Like
  • Reliable, well-documented data pipelines with minimal failures

  • High data quality, consistency, and availability across systems

  • Improved query performance and reduced data latency

  • Faster onboarding of new data sources and datasets

  • Measurable reduction in manual effort through automation

  • Trusted datasets that enable accurate reporting and analytics
