
A leading tech company based in Kuala Lumpur seeks an experienced Data Engineer to take ownership of cloud data architecture and implement robust data solutions using AWS services. The ideal candidate has a minimum of 7 years in data engineering, with extensive hands-on experience in AWS. Key responsibilities include designing high-performance ETL/ELT pipelines and ensuring data integrity and security. Strong communication and mentoring skills are essential. Familiarity with tools like Power BI and Snowflake is a plus.
Take end-to-end ownership of our cloud data architecture: designing, developing, and implementing a robust data warehouse using AWS services such as S3, Glue, Redshift, Lambda, and Step Functions.
Lead the evolution of our data infrastructure with a long-term vision, ensuring scalability, reliability, and performance.
Define and enforce high standards across data engineering—driving excellence in source control, automation, testing, and deployment.
Ensure data integrity, governance, and security are embedded throughout the pipeline, delivering datasets stakeholders can depend on.
Promote engineering best practices through clean, well-documented code, peer reviews, and strong CI/CD workflows.
Partner closely with analytics, business, and IT teams to understand needs and co-create scalable, user-friendly data solutions.
Design and maintain high-performance ETL/ELT pipelines to rapidly transform raw data into ready-to-use, structured datasets.
Continuously optimize data models (e.g., star schema) for analytics and reporting, accelerating decision-making across the business.
Stay ahead of the curve on emerging AWS technologies, recommending innovations that help us better serve our customers and scale smarter.
Minimum 7 years of experience in data engineering, with at least 3 years working in cloud-based environments (preferably AWS).
Strong hands-on experience with AWS S3, Glue, Redshift, Lambda, Step Functions, and other core AWS services.
Proficient in SQL and Python for data transformation and automation.
Experience building and managing data models and data pipelines in large-scale data environments.
Solid understanding of data warehousing principles, data lakes, and modern data architecture.
Experience leading and mentoring data engineering teams.
Strong communication and collaboration skills to work with cross-functional teams.
Experience with Power BI, Snowflake on AWS, or Databricks is a plus.
Exposure to DevOps practices such as CI/CD.
Familiarity with data governance and security frameworks in AWS.