Senior Data Engineer (dbt, Airflow)

MOL AccessPortal

Kuala Lumpur

On-site

MYR 80,000 - 120,000

Full time


Job summary

A leading data solutions provider based in Kuala Lumpur is seeking a talented Data Engineer to design and develop robust, scalable data pipelines. The ideal candidate will have extensive experience in data architecture and governance, with proficiency in tools such as Apache Spark and AWS Redshift. You will collaborate with data scientists to support AI/ML initiatives, mentor junior team members, and drive the adoption of new technologies. Strong communication skills and business acumen are essential for aligning data solutions with business objectives.

Qualifications

  • Bachelor's degree in Computer Science, Data Engineering, Statistics, or related field.
  • 5+ years of experience in data engineering focused on scalable data architectures.
  • Expert in Python and SQL programming.

Responsibilities

  • Lead the design and development of scalable data pipelines for analytics and AI/ML workloads.
  • Build and maintain data architectures including data warehouses and data lakes.
  • Implement and optimize data orchestration workflows using Airflow.

Skills

Python
SQL
Data Governance
Dimensional modeling
Analytical skills
Collaboration
Business acumen
English proficiency

Education

Bachelor's degree in Computer Science or related field

Tools

AWS Redshift
Apache Spark
Apache Flink
Apache Kafka
Airflow
dbt
Docker
Kubernetes

Job description

  • Lead the design and development of robust, scalable data pipelines for both traditional analytics and AI/ML workloads
  • Build and maintain data architectures including data warehouses, data lakes, and real-time streaming solutions using tools like Redshift, Spark, Flink, and Kafka
  • Implement and optimize data orchestration workflows using Airflow and data transformation processes using dbt (a minimal sketch follows this list)
  • Design and implement dimensional models and lead dimensional modeling design initiatives
  • Develop automated data workflows and integrate with DevOps/MLOps frameworks using Docker, Kubernetes, and cloud infrastructure
  • Implement best practices for data governance, including data quality, security, compliance, data lineage, and access control (a toy validation sketch follows this list)
  • Collaborate with data scientists, analysts, and business stakeholders to understand technical requirements and deliver reliable data infrastructure
  • Apply strong business acumen to ensure data solutions align with business objectives and requirements
  • Support AI/ML initiatives by building feature stores, vector databases, and real-time inference pipelines
  • Continuously explore and adopt new technologies in the data engineering and AI/ML space
  • Proactively drive new initiatives and mentor junior team members
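
A minimal sketch of the kind of Airflow-plus-dbt workflow described above, assuming Airflow 2.x and the dbt CLI; the DAG id, schedule, and project path are hypothetical:

# Illustrative Airflow DAG: land raw data, then run dbt transformations.
# The dbt project path /opt/dbt/analytics is a hypothetical example.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_analytics_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older 2.x versions use schedule_interval
    catchup=False,
) as dag:
    # Placeholder ingestion step; a real pipeline might trigger a Spark job
    # or a managed ingestion service here.
    extract = BashOperator(
        task_id="extract_raw_data",
        bash_command="echo 'extracting raw data'",
    )

    # Run dbt models against the warehouse (e.g., Redshift).
    transform = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/analytics",
    )

    extract >> transform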
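
Likewise, a toy sketch of the data-quality gate implied by the governance bullet, assuming pandas; the column names (order_id, amount) are hypothetical:

# Toy data-quality gate: fail fast before loading a batch into the warehouse.
import pandas as pd

def validate_orders(df: pd.DataFrame) -> list[str]:
    """Return human-readable data-quality violations found in the batch."""
    problems = []
    if df["order_id"].isna().any():
        problems.append("order_id contains nulls")
    if df["order_id"].duplicated().any():
        problems.append("order_id is not unique")
    if (df["amount"] < 0).any():
        problems.append("amount contains negative values")
    return problems

# Example batch with two deliberate violations (duplicate id, negative amount).
batch = pd.DataFrame({"order_id": [1, 2, 2], "amount": [10.0, -5.0, 7.5]})
issues = validate_orders(batch)
if issues:
    raise ValueError("; ".join(issues))  # block the load and alert on failure
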
Key Qualifications:
  • Bachelor's degree in Computer Science, Data Engineering, Statistics, or related field
  • 5+ years of experience in data engineering with a focus on scalable data architectures
  • Expert proficiency in Python and SQL programming languages
  • Hands-on experience with AWS Redshift, Apache Airflow, and dbt (Data Build Tool)
  • Strong experience with big data frameworks: Apache Spark, Apache Flink, and Apache Kafka
  • Solid understanding of Linux, Docker, and Kubernetes for containerization and orchestration
  • Experience with at least one cloud platform (AWS preferred; GCP or Azure acceptable)
  • Proven experience in dimensional modeling design and implementation (a toy star-schema sketch follows this list)
  • Strong business acumen, with sensitivity to business requirements and the ability to translate them into robust technical data solutions
  • Fluent in English (reading, writing, and verbal communication)
  • Experience in data governance including data quality, security, access management, and data lineage
  • Foundational knowledge of AI/ML workflows, model deployment pipelines, and LLM integration patterns
  • Demonstrated ability to lead technical initiatives and drive adoption of new technologies independently
  • Strong analytical and communication skills with experience working across cross-functional teams
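
A toy star schema illustrating the dimensional modeling qualification, sketched with Python's built-in sqlite3 module; all table and column names are hypothetical:

# Star schema: one fact table keyed to two dimension tables.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_customer (
    customer_key  INTEGER PRIMARY KEY,
    customer_name TEXT,
    country       TEXT
);
CREATE TABLE dim_date (
    date_key  INTEGER PRIMARY KEY,  -- e.g., 20240131
    full_date TEXT,
    month     INTEGER,
    year      INTEGER
);
CREATE TABLE fact_sales (
    sale_id      INTEGER PRIMARY KEY,
    customer_key INTEGER REFERENCES dim_customer(customer_key),
    date_key     INTEGER REFERENCES dim_date(date_key),
    quantity     INTEGER,
    amount       REAL
);
""")

# A typical analytical query joins the fact table back to its dimensions.
rows = conn.execute("""
    SELECT d.year, c.country, SUM(f.amount) AS revenue
    FROM fact_sales f
    JOIN dim_customer c ON c.customer_key = f.customer_key
    JOIN dim_date d     ON d.date_key = f.date_key
    GROUP BY d.year, c.country
""").fetchall()
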
Nice to Have:
  • Experience with OpenMetadata for data catalog and governance
  • SQL Server database experience
  • Experience in gaming, e-commerce, or fintech industries