Senior Data Engineer

Ploy Asia

Kuala Lumpur

Hybrid

MYR 100,000 - 150,000

Full time

Job summary

A leading data solutions company is seeking a Senior Data Engineer in Kuala Lumpur to tackle complex data challenges in a hybrid work environment. The candidate will design high-performance data pipelines, mentor junior staff, and collaborate with cross-functional teams. A minimum of 5 years in data engineering, along with expertise in Apache Spark, Airflow, and cloud platforms like AWS or Azure, is essential. This role offers a competitive salary of up to MYR 19,000 per month.

Qualifications

  • 5+ years of hands-on experience in data engineering roles within large-scale environments.
  • Proven expertise with distributed processing platforms.
  • Experience architecting and deploying data solutions on cloud platforms.

Responsibilities

  • Design and maintain scalable ETL pipelines for data processing.
  • Architect and optimize cloud-based data solutions.
  • Mentor junior engineers and shape best practices in data engineering.

Skills

Apache Spark
Airflow
Hadoop
Kafka
Python
Java
Scala

Tools

AWS
Azure
GCP

Job description

Location: Kuala Lumpur, Malaysia | Hybrid Work Model
Job Type: Permanent
Salary: Up to MYR 19,000 per month

We are seeking an experienced Senior Data Engineer to join our growing Data and AI team in Kuala Lumpur. This role offers the opportunity to work on complex, large‑scale data challenges in a fast‑evolving data‑driven environment, collaborating closely with cross‑functional teams across Engineering, Data, and Product.

As a Senior Data Engineer, you will design and implement scalable, high‑performance data pipelines and infrastructure to power data products and analytics solutions used globally. You'll also play a key role in mentoring junior engineers and shaping best practices in data engineering.

Key Responsibilities:
  • Data Pipeline Development: Design, build, and maintain scalable and reliable ETL pipelines for ingesting and processing data from multiple sources.
  • Data Infrastructure: Architect, deploy, and optimize cloud‑based data solutions to support high‑volume and real‑time data workloads.
  • System Optimization: Ensure data systems are efficient, resilient, and scalable as the organization continues to grow.
  • Collaboration: Work closely with data scientists, engineers, and product managers to understand data requirements and deliver robust solutions.
  • Mentorship: Provide technical guidance, mentorship, and best practices to junior engineers within the team.
  • Innovation: Evaluate new data engineering tools, frameworks, and architectures to continuously improve data reliability and performance.

Qualifications:
  • Experience: 5+ years of hands‑on experience in data engineering roles, preferably within large‑scale or product‑based environments.
  • Technical Skills:
    • Proven expertise with distributed processing platforms such as Apache Spark, Airflow, and Hadoop.
    • Strong understanding of data engineering tools and messaging systems such as Kafka.
    • Proficiency in Python, Java, or Scala, with solid knowledge of data structures and algorithmic complexity.
    • Experience building full data pipelines, from raw data ingestion through transformation.
  • Cloud Platforms: Experience architecting and deploying data solutions on AWS, Azure, or GCP.
  • Communication: Excellent communication and collaboration skills with the ability to engage stakeholders across technical and non‑technical functions.
  • Nice to Have: Experience with modern data warehousing, data lakes, and real-time analytics pipelines.