
A leading logistics company in Johannesburg seeks a Senior Data Engineer to design and maintain scalable ELT pipelines for data integration and quality. The role requires strong SQL expertise and experience with ELT/ETL tools such as Azure Data Factory and Apache Airflow, along with 3–5 years' experience in data engineering, preferably within logistics or supply chain environments. Join a dynamic team focused on building reliable data infrastructure that drives operational efficiency and analytics.
Senior Data Engineer | South Africa | Permanent
Build the data backbone that powers smarter logistics decisions.
Own complex pipelines that turn operational data into real-time insight across the supply chain.
This role sits at the core of a Business Intelligence environment, responsible for designing, building, and maintaining a scalable data infrastructure that supports reporting, advanced analytics, and automation. You will work with high-volume operational and supply chain data, ensuring secure, reliable, and timely access for analytics, optimisation, and decision-making across the logistics value chain.
Working closely with BI, analytics, and operational stakeholders, you will integrate multiple data sources, improve data quality, and continuously optimise performance. The environment is fast-paced and operationally critical, requiring strong problem-solving skills, attention to detail, and the ability to work under pressure.
Our client operates in a complex, regulated logistics environment, supporting critical supply chain operations across South Africa. The organisation is data-driven, performance-focused, and committed to using analytics and automation to improve efficiency, visibility, and compliance across its network.
Key Responsibilities
Design, build, and maintain robust ELT pipelines with high availability and low latency (illustrated in the sketch after this list)
Integrate data across ERP, WMS, TMS, and IoT platforms
Manage and optimise data lake and data warehouse environments
Develop, optimise, and maintain advanced SQL transformations and pipelines
Ensure high standards of data quality, validation, and governance
Collaborate with BI and data science teams to deliver trusted datasets
Automate recurring data processes to reduce manual effort
Investigate data issues, perform root cause analysis, and implement permanent fixes
Maintain clear documentation and support data cataloguing initiatives
Support data security, access controls, and compliance requirements
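To make these responsibilities concrete, here is a minimal sketch of an orchestrated ELT step with a built-in quality gate. It assumes an Airflow 2.x deployment writing to a Postgres-based warehouse; the connection ID (warehouse_db) and the schemas and tables (raw.shipment_events, analytics.shipments_daily) are hypothetical stand-ins, not part of the client's actual stack.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.providers.postgres.hooks.postgres import PostgresHook
from airflow.providers.postgres.operators.postgres import PostgresOperator


def check_row_count(**context):
    # Quality gate: fail the run if the transformed table is empty,
    # so a bad load never reaches downstream BI consumers.
    hook = PostgresHook(postgres_conn_id="warehouse_db")
    count = hook.get_first("SELECT COUNT(*) FROM analytics.shipments_daily")[0]
    if count == 0:
        raise ValueError("analytics.shipments_daily is empty after load")


with DAG(
    dag_id="shipments_elt",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Set-based SQL transformation, run inside the warehouse (ELT style).
    transform = PostgresOperator(
        task_id="transform_shipments",
        postgres_conn_id="warehouse_db",
        sql="""
            INSERT INTO analytics.shipments_daily
            SELECT shipment_id,
                   depot,
                   COUNT(*)        AS events,
                   MAX(updated_at) AS last_seen
            FROM raw.shipment_events
            WHERE updated_at::date = '{{ ds }}'
            GROUP BY shipment_id, depot;
        """,
    )
    validate = PythonOperator(
        task_id="validate_row_count",
        python_callable=check_row_count,
    )

    transform >> validate

The pattern matters more than the specifics: the transformation runs as set-based SQL in the warehouse, and the validation task stops the run before questionable data reaches reporting.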
Requirements
Degree in Computer Science or a related field
3–5 years’ experience in data engineering, ideally within logistics or supply chain environments
Strong experience with ELT/ETL tools such as Azure Data Factory, SSIS, or Apache Airflow
Advanced SQL expertise and proficiency in Python or a similar scripting language
Solid understanding of data modelling and data warehousing concepts
Experience working with cloud data platforms (Azure, AWS, or GCP)
Familiarity with version control and CI/CD practices for data products (see the test sketch after this list)
Strong analytical thinking, problem-solving ability, and attention to detail
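As an illustration of the SQL-plus-Python and CI/CD expectations above, here is a hypothetical pytest-style unit test for a small transformation helper; deduplicate_events and its column names are invented for the example. Tests like this, run automatically on every commit, are what "CI/CD practices for data products" typically looks like day to day.

import pandas as pd


def deduplicate_events(df: pd.DataFrame) -> pd.DataFrame:
    # Keep only the latest record per shipment_id, a common cleanup
    # step before loading raw events into the warehouse.
    return (
        df.sort_values("updated_at")
        .drop_duplicates(subset="shipment_id", keep="last")
        .reset_index(drop=True)
    )


def test_deduplicate_events_keeps_latest_record():
    df = pd.DataFrame({
        "shipment_id": [1, 1, 2],
        "updated_at": pd.to_datetime(["2024-01-01", "2024-01-02", "2024-01-01"]),
    })
    out = deduplicate_events(df)
    assert len(out) == 2  # one row per shipment
    # The surviving row for shipment 1 must be the most recent one.
    assert out.loc[out.shipment_id == 1, "updated_at"].iloc[0] == pd.Timestamp("2024-01-02")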
What Success Looks Like
Reliable, well-documented data pipelines with minimal failures
High data quality, consistency, and availability across systems
Improved query performance and reduced data latency
Faster onboarding of new data sources and datasets
Measurable reduction in manual effort through automation
Trusted datasets that enable accurate reporting and analytics