
A leading data solutions provider in Kuala Lumpur is hiring a Data Engineer to design, build, and operate data pipelines on a modern platform. The role focuses on data ingestion and transformation using Apache Spark and Kafka. Candidates should have strong programming skills and experience with SQL and relational databases. This is a hands-on, high-impact role within a small team, with a growth path toward senior positions.
We are hiring a Data Engineer to design, build, and operate batch and event-driven data pipelines on a modern on-premise data platform.
This role focuses on data ingestion, transformation, and processing with Apache Spark and Kafka, supporting analytics, reporting, and operational dashboards. You will work closely with Platform Integration Engineers, who manage the underlying infrastructure and streaming platform.
Key skills and technologies:
- Apache Spark (SQL / PySpark / Structured Streaming)
- Apache Kafka
- Batch and streaming data pipelines
- ETL / ELT
- CDC (Change Data Capture)
- PostgreSQL / relational databases
- Docker / Kubernetes
- Linux
- On-premise data platforms
We do not expect every candidate to meet every requirement, but strong experience with most of the skills above is important.