Lead Data Engineer

TechTiera · Kuala Lumpur (On-site)
MYR 120,000 - 150,000 · Full time

Job summary

A leading tech company based in Kuala Lumpur seeks an experienced Data Engineer to take ownership of its cloud data architecture and implement robust data solutions using AWS services. The ideal candidate has at least 7 years of experience in data engineering, with extensive hands-on AWS experience. Key responsibilities include designing high-performance ETL/ELT pipelines and ensuring data integrity and security. Strong communication and mentoring skills are essential. Familiarity with tools such as Power BI and Snowflake is a plus.

Qualifications

  • Minimum 7 years of experience in data engineering, with at least 3 years in a cloud environment.
  • Hands-on experience with AWS core services.
  • Proficient in data transformation and automation using SQL and Python.

Responsibilities

  • Take ownership of cloud data architecture and develop a robust data warehouse.
  • Ensure scalability, reliability, and performance of data infrastructure.
  • Optimize data models for analytics and reporting.

Skills

  • Data engineering experience
  • AWS services (S3, Glue, Redshift, Lambda, Step Functions)
  • Proficient in SQL
  • Proficient in Python
  • Data modeling
  • Mentoring skills
  • Communication
  • Collaboration

Tools

  • Power BI
  • Snowflake on AWS
  • Databricks

Job description

Responsibilities

Take end-to-end ownership of our cloud data architecture: designing, developing, and implementing a robust data warehouse using AWS services such as S3, Glue, Redshift, Lambda, and Step Functions.
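
As a hedged illustration of how these services often fit together (the posting does not describe this team's actual pipeline), the sketch below shows an AWS Lambda handler that starts a Glue job run when a new object lands in S3. The Glue job name, the job argument, and the dataset layout are all invented for the example:

    import boto3

    glue = boto3.client("glue")

    def handler(event, context):
        # S3 put-event notification: the object key identifies the dataset
        # that just arrived in the raw landing bucket.
        record = event["Records"][0]
        key = record["s3"]["object"]["key"]

        # Start the (hypothetical) Glue ETL job for that dataset, passing
        # the key through as a job argument.
        response = glue.start_job_run(
            JobName="raw-events-to-curated",
            Arguments={"--input_key": key},
        )
        return {"JobRunId": response["JobRunId"]}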

Lead the evolution of our data infrastructure with a long-term vision, ensuring scalability, reliability, and performance.

Define and enforce high standards across data engineering—driving excellence in source control, automation, testing, and deployment.

Ensure data integrity, governance, and security are embedded throughout the pipeline, delivering datasets stakeholders can depend on.

Promote engineering best practices through clean, well-documented code, peer reviews, and strong CI/CD workflows.

Partner closely with analytics, business, and IT teams to understand needs and co-create scalable, user-friendly data solutions.

Design and maintain high-performance ETL/ELT pipelines to rapidly transform raw data into ready-to-use, structured datasets.
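
To make that duty concrete, here is a minimal PySpark sketch of a raw-to-structured step of the kind described above. Bucket names, paths, and column names are hypothetical, and a team like this one might run the same logic as a Glue job rather than plain Spark:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("raw-to-structured").getOrCreate()

    # Raw landing zone: newline-delimited JSON dropped by upstream producers.
    raw = spark.read.json("s3://example-raw-bucket/events/")

    # Light transformation: typed timestamp, deduplication, partition column.
    structured = (
        raw.withColumn("event_ts", F.to_timestamp("event_ts"))
           .dropDuplicates(["event_id"])
           .withColumn("event_date", F.to_date("event_ts"))
    )

    # Curated zone: columnar and partitioned by date for cheap downstream scans.
    (structured.write
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet("s3://example-curated-bucket/events/"))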

Continuously optimize data models (e.g., star schema) for analytics and reporting, accelerating decision-making across the business.
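
For readers unfamiliar with the term, a star schema centres a narrow fact table on small dimension tables so that reporting queries stay simple and fast. The Spark SQL sketch below is illustrative only; every table and column name is invented:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("star-schema-demo").getOrCreate()

    # Dimension table: small, descriptive attributes for slicing.
    spark.sql("""
        CREATE TABLE IF NOT EXISTS dim_customer (
            customer_key BIGINT,
            customer_name STRING,
            region STRING
        ) USING parquet
    """)

    # Fact table: narrow rows of measures, keyed into the dimensions.
    spark.sql("""
        CREATE TABLE IF NOT EXISTS fact_sales (
            sale_id BIGINT,
            customer_key BIGINT,   -- foreign key into dim_customer
            sale_date DATE,
            amount DECIMAL(12, 2)
        ) USING parquet
    """)

    # Typical analytics query: aggregate the fact, slice by a dimension.
    spark.sql("""
        SELECT d.region, SUM(f.amount) AS revenue
        FROM fact_sales f
        JOIN dim_customer d ON f.customer_key = d.customer_key
        GROUP BY d.region
    """).show()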

Stay ahead of the curve on emerging AWS technologies, recommending innovations that help us better serve our customers and scale smarter.

Requirements

Minimum 7 years of experience in data engineering, with at least 3 years working in cloud-based environments (preferably AWS).

Strong hands-on experience with AWS S3, Glue, Redshift, Lambda, Step Functions, and other core AWS services.

Proficient in SQL and Python for data transformation and automation.

Experience in building and managing data models and data pipelines for large-scale data environments.

Solid understanding of data warehousing principles, data lakes, and modern data architecture.

Experience leading and mentoring data engineering teams.

Strong communication and collaboration skills to work with cross-functional teams.

Experience with Power BI, Snowflake on AWS, or Databricks is a plus.

Exposure to DevOps practices such as CI/CD.

Familiarity with data governance and security frameworks in AWS.
