Data Engineer

TechBiz Global GmbH

Johannesburg

On-site

ZAR 600 000 - 800 000

Full time

Yesterday

Job summary

A leading recruitment service provider in Johannesburg is seeking a talented Data Engineer to design and maintain data ingestion pipelines and optimize data processes. The candidate should have over 5 years of experience, particularly with Apache Kafka and cloud platforms such as AWS and GCP. This role offers an exciting opportunity to work in an innovative environment, collaborating with cross-functional teams to integrate and manage data effectively.

Qualifications

  • 5+ years of hands-on experience as a Data Engineer or similar role.
  • Strong experience with Apache Kafka and Kafka Connect.
  • Proficiency in Python for automation and data handling.

Responsibilities

  • Design and maintain data ingestion pipelines using Kafka Connect.
  • Ingest data from MySQL and PostgreSQL into AWS S3 and GCP BigQuery.
  • Collaborate with teams to understand data requirements.

Skills

Data Engineering
Apache Kafka
Debezium
MySQL
PostgreSQL
AWS S3
GCP BigQuery
Python

Tools

Docker
Kubernetes
Airflow
Terraform

Job description

At TechBiz Global, we provide recruitment services to the TOP clients in our portfolio. We are currently seeking a Data Engineer to join one of our clients' teams. If you're looking for an exciting opportunity to grow in an innovative environment, this could be the perfect fit for you.

Key Responsibilities
  • Design, develop, and maintain data ingestion pipelines using Kafka Connect and Debezium for real-time and batch data integration.
  • Ingest data from MySQL and PostgreSQL databases into AWS S3, Google Cloud Storage (GCS), and BigQuery.
  • Implement best practices for data modeling, schema evolution, and efficient partitioning in the Bronze Layer.
  • Ensure reliability, scalability, and monitoring of Kafka Connect clusters and connectors.
  • Collaborate with cross-functional teams to understand source systems and downstream data requirements.
  • Optimize data ingestion processes for performance and cost efficiency.
  • Contribute to automation and deployment scripts using Python and cloud-native tools.
  • Stay updated with emerging data lake technologies such as Apache Hudi or Apache Iceberg.
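For context, a CDC ingestion pipeline of the kind described above is typically driven by a Kafka Connect connector configuration. The sketch below shows an illustrative Debezium MySQL source connector definition (property names follow Debezium 2.x conventions); the hostnames, credentials, database name, and topic prefix are placeholders, not details from this role:

```json
{
  "name": "mysql-source-connector",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "mysql",
    "database.port": "3306",
    "database.user": "debezium",
    "database.password": "<secret>",
    "database.server.id": "184054",
    "database.include.list": "inventory",
    "topic.prefix": "inventory",
    "schema.history.internal.kafka.bootstrap.servers": "kafka:9092",
    "schema.history.internal.kafka.topic": "schema-changes.inventory"
  }
}
```

A configuration like this is usually registered against the Kafka Connect REST API (e.g. `POST /connectors` on the Connect worker), after which change events from the listed MySQL tables flow into Kafka topics for downstream sink connectors to land in S3, GCS, or BigQuery.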
Required Skills and Qualifications
  • 5+ years of hands‑on experience as a Data Engineer or similar role.
  • Strong experience with Apache Kafka and Kafka Connect (sink and source connectors).
  • Experience with Debezium for change data capture (CDC) from RDBMS.
  • Proficiency in working with MySQL and PostgreSQL.
  • Hands‑on experience with AWS S3, GCP BigQuery, and GCS.
  • Proficiency in Python for automation, data handling, and scripting.
  • Understanding of data lake architectures and ingestion patterns.
  • Solid understanding of ETL / ELT pipelines, data quality, and observability practices.
Good to Have
  • Experience with containerization (Docker, Kubernetes).
  • Familiarity with workflow orchestration tools (Airflow, Dagster, etc.).
  • Exposure to infrastructure-as-code tools (Terraform, CloudFormation).
  • Familiarity with data versioning and table formats such as Apache Hudi or Apache Iceberg.