Data Engineer

ITMAX SYSTEM BERHAD

Kuala Lumpur

On-site

MYR 100,000 - 150,000

Full time


Job summary

A leading data solutions provider in Kuala Lumpur is hiring a Data Engineer to design, build, and operate data pipelines on a modern platform. The role focuses on data ingestion and transformation using Apache Spark and Kafka. Candidates should have strong programming skills and experience with SQL and relational databases. The position offers hands-on work in a high-impact role within a small team, with opportunities for growth towards senior positions.

Benefits

Ownership of core data pipelines
Clear growth path towards Senior Data Engineer
Hands-on experience with Spark and Kafka

Qualifications

  • Strong programming skills in Python, Java, or Scala.
  • Hands-on experience with Apache Spark for data processing.
  • Working knowledge of messaging platforms like Apache Kafka.
  • Strong SQL skills with experience in PostgreSQL or other RDBMS.

Responsibilities

  • Design and operate large-scale batch processing pipelines.
  • Develop ETL/ELT workflows for analytics and reporting.
  • Implement data ingestion pipelines from various sources.
  • Collaborate on deployment and reliability of data systems.

Skills

Programming skills in Python, Java, or Scala
Experience with Apache Spark
Knowledge of SQL and PostgreSQL
Familiarity with Linux environments
Experience with Apache Kafka

Tools

Apache Spark
Apache Kafka
Docker
Kubernetes

Job description

We are hiring a Data Engineer to design, build, and operate batch and event-driven data pipelines on a modern on-premise data platform.

This role focuses on data ingestion, transformation, and processing with Apache Spark and Kafka, in support of analytics, reporting, and operational dashboards. You will work closely with Platform Integration Engineers, who manage the underlying infrastructure and streaming platform.

Key technologies: Apache Spark (SQL / PySpark / Structured Processing), Apache Kafka, batch & streaming data pipelines, ETL / ELT, CDC (Change Data Capture), PostgreSQL / relational databases, Docker / Kubernetes, Linux, on-prem data platform.

Key Responsibilities

Batch Data Processing
  • Design, build, and operate large-scale batch processing pipelines using Apache Spark (see the sketch after this list)
  • Develop ETL / ELT workflows for analytics and reporting
  • Implement data ingestion pipelines from databases, APIs, and event sources
  • Optimise data pipelines for performance, reliability, resource efficiency and data quality
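
To give a concrete flavour of this kind of batch work, here is a minimal PySpark sketch; the connection details, table, columns, and output path are hypothetical, not taken from the posting.

  from pyspark.sql import SparkSession
  from pyspark.sql import functions as F

  # Illustrative batch job: read a raw orders table from PostgreSQL,
  # aggregate daily totals, and write a reporting dataset.
  spark = SparkSession.builder.appName("daily-orders-batch").getOrCreate()

  orders = (
      spark.read.format("jdbc")
      .option("url", "jdbc:postgresql://db-host:5432/analytics")  # hypothetical host/db
      .option("dbtable", "raw.orders")                            # hypothetical table
      .option("user", "etl_user")
      .option("password", "...")
      .load()
  )

  daily_totals = (
      orders.withColumn("order_date", F.to_date("created_at"))
      .groupBy("order_date")
      .agg(F.sum("amount").alias("total_amount"),
           F.count("*").alias("order_count"))
  )

  daily_totals.write.mode("overwrite").parquet("/data/marts/daily_order_totals")
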
Streaming & Event-Driven Data
  • Consume and process events from Kafka (sketched below)
  • Introduce streaming or near-real-time processing when required
  • Work with CDC pipelines and event-based data sources
  • Support schema evolution and downstream data consumption
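
As an illustration of event consumption, a Spark Structured Streaming job can read from Kafka roughly as follows; the broker address, topic, and schema are hypothetical, and a Debezium-style CDC payload would be parsed the same way from its "after" field.

  from pyspark.sql import SparkSession
  from pyspark.sql import functions as F
  from pyspark.sql.types import StructType, StructField, StringType, DoubleType

  spark = SparkSession.builder.appName("orders-stream").getOrCreate()

  # Hypothetical event schema for messages on the "orders.events" topic.
  schema = StructType([
      StructField("order_id", StringType()),
      StructField("status", StringType()),
      StructField("amount", DoubleType()),
  ])

  events = (
      spark.readStream.format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
      .option("subscribe", "orders.events")              # hypothetical topic
      .load()
      .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
      .select("e.*")
  )

  # Checkpointing lets the query recover its position across restarts.
  query = (
      events.writeStream.format("parquet")
      .option("path", "/data/streams/orders")
      .option("checkpointLocation", "/data/checkpoints/orders")
      .start()
  )
  query.awaitTermination()
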
Collaboration & Operations
  • Collaborate with Platform Integration Engineers on deployment, observability, and reliability
  • Work with backend and application teams on data requirements and integration
  • Participate in troubleshooting data pipeline issues and performance bottlenecks
  • Maintain documentation, data models, and design decisions

Requirements

We do not expect every candidate to meet every requirement, but strong experience in most of the following is important.

Core Skills
  • Strong programming skills in Python, Java, or Scala
  • Hands-on experience with Apache Spark
  • Working knowledge of Apache Kafka or other messaging platforms
  • Strong SQL skills and experience with PostgreSQL or other RDBMS
  • Familiarity with Linux environments

Nice to Have
  • Experience with Apache Flink, Spark Streaming, Kafka Streams
  • Exposure to CDC tools (Debezium, Maxwell, GoldenGate)
  • Experience with Docker and Kubernetes
  • Familiarity with workflow orchestration tools such as Airflow (see the example after this list)
  • Experience with monitoring, CI/CD, or infrastructure-as-code tools
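
For orchestration, a minimal Airflow DAG that schedules a Spark batch job might look like the sketch below, assuming Airflow 2.x; the DAG id, schedule, and script path are illustrative only.

  from datetime import datetime

  from airflow import DAG
  from airflow.operators.bash import BashOperator

  # Hypothetical daily schedule for the batch job sketched earlier.
  with DAG(
      dag_id="daily_orders_batch",
      start_date=datetime(2024, 1, 1),
      schedule="@daily",
      catchup=False,
  ) as dag:
      BashOperator(
          task_id="spark_submit",
          bash_command="spark-submit /opt/jobs/daily_orders_batch.py",  # hypothetical path
      )
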
Who Should Apply
  • Data Engineers who enjoy pipeline design and optimisation
  • Backend engineers moving into data engineering
  • Candidates interested in batch and streaming data systems
  • Candidates with strong fundamentals and a willingness to learn

What We Offer
  • Ownership of core data pipelines in a production system
  • Hands-on experience with Spark, Kafka, and distributed data systems
  • Clear growth path toward Senior Data Engineer / Data Architect
  • High-impact role in a small, engineering-focused team