Lead Data Engineer

Monroe Consulting Group

Malaysia

On-site

MYR 120,000 - 180,000

Full time

Today

Job summary

A leading recruitment agency is seeking, on behalf of its client, a Lead Data Engineer to design and implement a cloud-native data warehouse on AWS. This role involves data architecture ownership, mentoring a team, and ensuring data integrity and security. The ideal candidate will have extensive experience in data engineering, strong AWS expertise, and proven leadership skills. Join a dynamic environment driving innovation and delivering exceptional data solutions.

Qualifications

  • 6-7 years of experience in data engineering, with at least 3 years in cloud environments, ideally AWS.
  • Strong hands-on experience with AWS services.
  • Proficient in SQL and Python for data transformation.
  • Experience in building data models and pipelines for large data environments.
  • Solid understanding of data warehousing and modern data architecture.

Responsibilities

  • Own the design and implementation of a cloud data warehouse.
  • Lead the evolution of data infrastructure ensuring scalability.
  • Enforce high standards in data engineering.
  • Ensure data integrity and security throughout the data pipeline.
  • Mentor the data engineering team.

Skills

Data engineering
AWS S3
SQL
Python
Data governance
Team leadership
Collaboration

Tools

AWS Glue
AWS Redshift
AWS Lambda
AWS Step Functions
AWS Athena

Job description
Lead Data Engineer

Monroe Consulting Group's Technology Division is recruiting on behalf of a well-established Government-Linked Company (GLC) with a strong footprint in the consumer, logistics, and technology sectors. Our client is recognised for driving transformative initiatives, embracing innovation, and delivering long-term value to stakeholders.

Job Summary

We are seeking an experienced and driven Lead Data Engineer to spearhead the design and development of a modern, cloud-native data warehouse on AWS. This role is critical to building a scalable, secure, and efficient data platform that supports analytics, reporting, and AI use cases across the organization. The ideal candidate is both technically hands-on and capable of leading a team to deliver enterprise-grade data solutions.

Job Responsibilities
  • Cloud Data Architecture Ownership - Take end-to-end ownership of the cloud data architecture: designing, developing, and implementing a robust data warehouse using AWS services such as S3, Glue, Redshift, Lambda, Step Functions, and Athena.
  • Infrastructure Evolution - Lead the evolution of data infrastructure with a long-term vision, ensuring scalability, reliability, and performance to support growing business needs.
  • Engineering Excellence - Define and enforce high standards across data engineering, driving excellence in source control, automation, testing, and deployment through clean, well-documented code and strong CI/CD workflows.
  • Data Governance & Security - Ensure data integrity, governance, and security are embedded throughout the pipeline, delivering datasets stakeholders can depend on with confidence.
  • Technical Mentorship - Act as a trusted technical mentor, growing the skillset of your team and raising the bar on data engineering quality through peer reviews and knowledge sharing.
  • ETL/ELT Pipeline Development - Design and maintain high-performance ETL/ELT pipelines to rapidly transform raw data into ready-to-use, structured datasets.
  • Data Modeling Optimization - Continuously optimize data models (e.g., star schema) for analytics and reporting, accelerating decision-making across the business.
  • Agile Innovation - Embrace agility: identify inefficiencies, ship improvements quickly, and iterate with speed and precision to drive continuous enhancement.
  • Cross-Functional Collaboration - Partner closely with analytics, business, and IT teams to understand needs and co-create scalable, user-friendly data solutions. Break down silos and foster a collaborative, cross-functional approach to solving complex data challenges.
  • Inclusive Leadership - Lead with empathy and clarity, creating an inclusive team culture where knowledge is shared and everyone is set up to succeed.
  • Self-Service Enablement - Build data systems that empower internal stakeholders to self-serve insights and deliver exceptional customer experiences.
  • Innovation & Technology Leadership - Stay ahead of the curve on emerging AWS technologies, recommending innovations that help better serve customers and scale smarter. Translate complex data into actionable solutions that directly impact product, strategy, and customer satisfaction.
Key Requirements
  • Experience - 6-7 years of experience in data engineering, with at least 3 years working in cloud-based environments (preferably AWS).
  • AWS Expertise - Strong hands-on experience with AWS S3, Glue, Redshift, Lambda, Step Functions, and other core AWS services.
  • Technical Proficiency - Proficient in SQL and Python for data transformation and automation.
  • Pipeline & Modeling Experience - Proven experience in building and managing data models and data pipelines for large-scale data environments.
  • Data Architecture Knowledge - Solid understanding of data warehousing principles, data lakes, and modern data architecture patterns.
  • Leadership Experience - Experience leading and mentoring data engineering teams with a track record of developing talent.
  • Communication Skills - Strong communication and collaboration skills to work effectively with cross-functional teams and translate technical concepts to business stakeholders.
Preferred Skills (Advantage)
  • Advanced Platforms - Experience with Snowflake on AWS or Databricks is a plus.
  • DevOps Practices - Exposure to DevOps practices such as CI/CD for automated deployment and infrastructure as code.
  • Governance Frameworks - Familiarity with data governance and security frameworks in AWS environments.