
A technology company in Selangor is seeking a Data Engineer to design, build, and operate data pipelines using Apache Spark and Kafka. The role involves data ingestion, transformation, and close collaboration with Platform Integration Engineers for infrastructure management. Ideal candidates should have strong programming skills in Python, Java, or Scala, and hands-on experience with Spark and Kafka. The position offers a clear growth path towards becoming a Senior Data Engineer within a dedicated engineering team.
We are hiring a Data Engineer to design, build, and operate batch and event-driven data pipelines on a modern on-premise data platform.
This role focuses on data ingestion, transformation, and processing using Apache Spark and Kafka, supporting analytics, reporting, and operational dashboards. You will work closely with Platform Integration Engineers, who manage the underlying infrastructure and streaming platform.
Key skills:
- Apache Spark (SQL / PySpark / Structured Processing)
- Apache Kafka
- Batch & Streaming Data Pipelines
- ETL / ELT
- CDC (Change Data Capture)
- PostgreSQL / Relational Databases
- Docker / Kubernetes
- Linux
- On-Prem Data Platform
We do not expect every candidate to meet every requirement, but strong experience with most of the skills above is important.