Senior Data Engineer, Melrose Arch

Computer Experts Personnel

City of Johannesburg Metropolitan Municipality

On-site

ZAR 600,000 - 800,000

Full time


Job summary

A leading IT staffing agency is seeking a skilled Data Engineer to join their growing team in Johannesburg. This role involves developing data pipelines and integrations, collaborating with cross-functional teams, and applying software engineering best practices. Candidates should have over 7 years of experience, with strong skills in Python and SQL, as well as cloud platform experience. This position is key for driving data initiatives and improving data reliability.

Qualifications

  • 7+ years of professional experience as a Data Engineer or similar role.
  • Advanced proficiency in Python for backend development.
  • Strong SQL skills for querying and modeling databases.
  • Experience with cloud platforms such as AWS, GCP, or Azure.
  • Hands-on experience with containerization technologies such as Docker or Kubernetes.
  • Solid understanding of RESTful APIs.

Responsibilities

  • Design, develop, test, and maintain reliable data pipelines.
  • Build and manage API-based data ingestion workflows.
  • Apply software engineering best practices: modular design, testing.
  • Collaborate closely with senior engineers and data scientists.

Skills

Python
SQL
Data ETL pipelines
Containerization
RESTful APIs
CI/CD workflows
Cloud platforms

Tools

AWS
Azure
Docker
Kubernetes

Job description

Overview

We are seeking a Data Engineer to join our growing engineering team. This is a key role for a motivated and technically skilled individual with a solid foundation in software engineering and data systems. You will work on building scalable data infrastructure, implementing robust data integrations, and collaborating with cross-functional teams to solve real-world data challenges.

Responsibilities

  • Design, develop, test, and maintain reliable data pipelines and ETL processes using Python and SQL
  • Build and manage API-based data ingestion workflows and real-time data integrations
  • Apply software engineering best practices: modular design, testing, version control, and documentation
  • Own and optimize data workflows and automation, ensuring efficiency and scalability
  • Collaborate closely with senior engineers, data scientists, and stakeholders to translate business needs into technical solutions
  • Maintain and enhance data reliability, observability, and error handling in production systems
  • Develop and support internal data-driven tools
  • Implement data operations best practices, including automated monitoring, alerting, and incident response for pipeline health
  • Apply DataOps and DevOps principles: CI/CD for data workflows, infrastructure-as-code, and containerized ETL deployments
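To illustrate the kind of pipeline work described above, here is a minimal sketch of a modular Python ETL with basic validation and error handling. The records, table name, and SQLite backend are purely illustrative stand-ins (a real pipeline would pull from an API and load into a production warehouse); this is a sketch of the extract/transform/load structure, not a reference implementation.

```python
import sqlite3

# Hypothetical records standing in for an API response payload.
RAW_RECORDS = [
    {"id": 1, "city": "Johannesburg", "reading": "23.5"},
    {"id": 2, "city": "Cape Town", "reading": "19.1"},
    {"id": 3, "city": "Durban", "reading": None},  # malformed row
]

def extract():
    """Return raw records; in practice this would call a REST API."""
    return RAW_RECORDS

def transform(records):
    """Validate and type-convert rows, skipping malformed ones."""
    clean = []
    for row in records:
        if row.get("reading") is None:
            continue  # basic error handling: drop incomplete rows
        clean.append((row["id"], row["city"], float(row["reading"])))
    return clean

def load(rows, conn):
    """Write transformed rows into a relational table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS readings "
        "(id INTEGER PRIMARY KEY, city TEXT, value REAL)"
    )
    conn.executemany("INSERT OR REPLACE INTO readings VALUES (?, ?, ?)", rows)
    conn.commit()

def run_pipeline(conn):
    """Compose the stages; each is independently unit-testable."""
    load(transform(extract()), conn)

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    run_pipeline(conn)
    print(conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0])  # 2
```

Keeping extract, transform, and load as separate functions is what makes the "modular design, testing" requirement above practical: each stage can be tested in isolation before the pipeline is containerized and wired into CI/CD.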
What You Bring / Qualifications

  • 7+ years of professional experience as a Data Engineer or in a similar role developing data ETL pipelines
  • Advanced proficiency in Python for backend development and scripting
  • Strong SQL skills with hands-on experience in querying and modeling relational databases
  • Experience with cloud platforms such as AWS, GCP, or Azure
  • Hands-on experience with containerization technologies such as Docker or Kubernetes
  • Solid understanding of RESTful APIs
  • Experience with version control systems (GitHub, GitLab, Bitbucket) and CI/CD workflows
  • Strong grasp of software development lifecycle (SDLC) and principles of clean, maintainable code
  • Demonstrated ability to work independently, own projects end-to-end, and mentor junior engineers
  • Familiarity with AI concepts and prompt engineering is a plus
Nice to Have

  • Experience with data security, privacy compliance, and access controls
  • Knowledge of infrastructure-as-code tools (e.g., Terraform, Helm)
  • Background in event-driven architecture or stream processing