Senior Data Engineer (DBT, Airflow)

MOL ACCESSPORTAL SDN. BHD.

Kuala Lumpur

On-site

MYR 100,000 - 150,000

Full time

Job summary

A leading technology company in Kuala Lumpur is seeking an experienced Data Engineer to lead the design and development of scalable data pipelines. The ideal candidate will have over 5 years of experience in data engineering and proficiency in Python and SQL. You will collaborate closely with data scientists and business stakeholders to deliver effective data infrastructure, while employing best practices in data governance and supporting AI/ML initiatives. This role offers competitive benefits in a dynamic environment.

Qualifications

  • 5+ years of experience in data engineering, focusing on scalable data architectures.
  • Expert proficiency in Python and SQL.
  • Hands-on experience with AWS Redshift, Apache Airflow, and DBT.
  • Strong experience with big data frameworks: Apache Spark, Flink, and Kafka.
  • Solid understanding of Linux, Docker, and Kubernetes.

Responsibilities

  • Lead the design and development of robust, scalable data pipelines.
  • Build and maintain data architectures including data warehouses and data lakes.
  • Implement and optimize data orchestration workflows using Airflow.
  • Collaborate with data scientists and stakeholders to deliver reliable data infrastructure.
  • Support AI/ML initiatives by building feature stores and real-time inference pipelines.

Skills

Python
SQL
Data Architecture
AWS Redshift
Apache Airflow
Docker
Kubernetes
Apache Spark
Apache Flink
Apache Kafka
Data Governance
AI/ML Knowledge

Education

Bachelor's degree in Computer Science, Data Engineering, Statistics, or related field

Tools

DBT (Data Build Tool)
OpenMetadata

Job description

Lead the design and development of robust, scalable data pipelines for both traditional analytics and AI/ML workloads.

Build and maintain data architectures including data warehouses, data lakes, and real-time streaming solutions using tools like Redshift, Spark, Flink, and Kafka.
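
As a rough illustration (not part of the posting itself), the Spark-plus-Kafka streaming pattern described above often looks like the following Structured Streaming sketch; the broker address, topic name, and lake paths are assumptions:

    # Minimal sketch: read a Kafka topic with Spark Structured Streaming and
    # land the raw events in a data-lake path. All names are illustrative.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("events-ingest").getOrCreate()

    events = (
        spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")  # assumed broker
        .option("subscribe", "events")                     # hypothetical topic
        .load()
    )

    query = (
        events.selectExpr("CAST(value AS STRING) AS json")  # raw payload as text
        .writeStream
        .format("parquet")
        .option("path", "s3a://lake/raw/events")             # assumed lake path
        .option("checkpointLocation", "s3a://lake/_chk/events")
        .start()
    )
    query.awaitTermination()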

Implement and optimize data orchestration workflows using Airflow and data transformation processes using DBT.
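
A minimal sketch of the Airflow-orchestrating-DBT pattern this describes (the DAG id, schedule, and dbt project path are illustrative assumptions, not details from this posting):

    # Hypothetical hourly DAG: run an extract step, then dbt transformations.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="hourly_dbt_pipeline",   # hypothetical name
        start_date=datetime(2024, 1, 1),
        schedule="@hourly",
        catchup=False,
    ) as dag:
        extract = BashOperator(
            task_id="extract_to_redshift",
            bash_command="python extract.py",  # placeholder extract step
        )
        transform = BashOperator(
            task_id="dbt_run",
            bash_command="dbt run --project-dir /opt/dbt",  # assumed path
        )
        extract >> transform  # dbt models build only after the extract succeeds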

Develop automated data workflows and integrate with DevOps/MLOps frameworks using Docker, Kubernetes, and cloud infrastructure.

Implement best practices for data governance, including data quality, security, compliance, data lineage, and access control.

Collaborate with data scientists, analysts, and business stakeholders to understand technical requirements and deliver reliable data infrastructure.

Demonstrate strong business sensitivity to ensure data solutions align with business objectives and requirements.

Support AI/ML initiatives by building feature stores, vector databases, and real-time inference pipelines.
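
Illustrative only: one common online feature-store pattern (this posting does not prescribe a particular store) keeps per-entity features in a low-latency key-value store such as Redis so an inference service can fetch them at request time. The host, key schema, and feature names below are assumptions:

    import redis

    r = redis.Redis(host="localhost", port=6379, decode_responses=True)

    # A batch pipeline publishes the latest features for an entity
    # (hypothetical user id and feature names).
    r.hset(
        "features:user:42",
        mapping={"txn_count_7d": "18", "avg_order_myr": "135.20"},
    )

    # The real-time inference service reads them back per request.
    features = r.hgetall("features:user:42")
    print(features)  # {'txn_count_7d': '18', 'avg_order_myr': '135.20'}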

Continuously explore and adopt new technologies in the data engineering and AI/ML space.

Proactively drive new initiatives and mentor junior team members.

Key Qualifications:
  • Bachelor's degree in Computer Science, Data Engineering, Statistics, or related field.
  • 5+ years of experience in data engineering with a focus on scalable data architectures.
  • Expert proficiency in Python and SQL programming languages.
  • Hands-on experience with AWS Redshift, Apache Airflow, and DBT (Data Build Tool).
  • Strong experience with big data frameworks: Apache Spark, Apache Flink, and Apache Kafka.
  • Solid understanding of Linux, Docker, and Kubernetes for containerization and orchestration.
  • Experience with at least one cloud platform (AWS preferred; GCP or Azure acceptable).
  • Proven experience designing and implementing dimensional models.
  • Strong business acumen, with the ability to translate business requirements into robust technical data solutions.
  • Fluent in English (reading, writing, and verbal communication).
  • Experience in data governance including data quality, security, access management, and data lineage.
  • Foundational knowledge of AI/ML workflows, model deployment pipelines, and LLM integration patterns.
  • Demonstrated ability to lead technical initiatives and drive adoption of new technologies independently.
  • Strong analytical and communication skills with experience working across cross-functional teams.

Nice to Have:
  • Experience with OpenMetadata for data catalog and governance.
  • Experience in gaming, e-commerce, or fintech industries.