
A leading technology company in Kuala Lumpur is seeking an experienced Data Engineer to lead the design and development of scalable data pipelines. The ideal candidate will have over 5 years of experience in data engineering and proficiency in Python and SQL. You will collaborate closely with data scientists and business stakeholders to deliver effective data infrastructure, while employing best practices in data governance and supporting AI/ML initiatives. This role offers competitive benefits in a dynamic environment.
Lead the design and development of robust, scalable data pipelines for both traditional analytics and AI/ML workloads.
Build and maintain data architectures, including data warehouses, data lakes, and real-time streaming solutions, using tools such as Redshift, Spark, Flink, and Kafka.
Implement and optimize data orchestration workflows using Airflow and data transformation processes using dbt.
Develop automated data workflows and integrate with DevOps/MLOps frameworks using Docker, Kubernetes, and cloud infrastructure.
Implement best practices for data governance, including data quality, security, compliance, data lineage, and access control.
Collaborate with data scientists, analysts, and business stakeholders to understand technical requirements and deliver reliable data infrastructure.
Demonstrate strong business acumen to ensure data solutions align with business objectives and requirements.
Support AI/ML initiatives by building feature stores, vector databases, and real-time inference pipelines.
Continuously explore and adopt new technologies in the data engineering and AI/ML space.
Proactively drive new initiatives and mentor junior team members.