
Data Engineer

Habib Group

Ampang Jaya Municipal Council

On-site

MYR 100,000 - 150,000

Full time

Yesterday


Job summary

A leading data solutions company is looking for a Data Engineer to design, build, and maintain data pipelines. This role involves developing data lake platforms, implementing scalable ETL/ELT solutions, and working with hybrid data environments. The ideal candidate will have strong skills in SQL and Python, and experience with Apache Spark, along with 6+ years of experience in data engineering. Attractive salary and benefits are offered, and the position is located in Selangor, Malaysia.

Benefits

Medical allowance
Miscellaneous allowance
Attractive commissions
Competitive salary

Qualifications

  • 6+ years of hands-on experience in data engineering, owning data pipelines end-to-end.
  • Proven experience building and supporting production data platforms.
  • Strong skills in SQL and Python; experience with Apache Spark is a must.

Responsibilities

  • Design, build, and maintain data pipelines for ingesting, transforming, and integrating data.
  • Develop and manage data lake/lakehouse platforms for analytics and BI.
  • Implement scalable ETL/ELT solutions for data processing.

Skills

SQL
Python
Apache Spark
Data Governance
Data Quality Checks

Education

Bachelor’s degree in Computer Science, IT, Engineering, or related field

Tools

Azure
Databricks
CI/CD Pipelines

Job description



Responsibilities
  • Design, build, and maintain data pipelines for ingesting, transforming, and integrating data from multiple sources (databases, APIs, files, streaming).
  • Develop and manage data lake / lakehouse platforms to support analytics, BI, and AI/ML use cases.
  • Design and support hybrid data environments, combining on-premise systems with cloud platforms (primarily Azure).
  • Implement scalable ETL/ELT solutions for batch and real-time data processing (a brief illustrative sketch follows this list).
  • Set up and manage analytics and ML workspaces (e.g. Databricks, notebooks) for team collaboration.
  • Ensure data solutions are secure, reliable, well-monitored, and optimized for performance and cost.
  • Implement data quality checks, validation, and monitoring to maintain high data reliability.
  • Apply data governance and security best practices, including access control, encryption, and metadata/lineage management.
  • Work closely with data scientists, analysts, and business stakeholders to deliver analytics-ready datasets.
  • Coordinate with system and data vendors to onboard new data sources and maintain stable integrations.
  • Translate business requirements into maintainable, production-ready data solutions.
  • Write high-quality Python, SQL, and Spark code, following engineering best practices.
  • Support DataOps / MLOps initiatives, including CI/CD, testing, version control, and pipeline observability.
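
For context only, the sketch below illustrates the kind of batch ETL step described in the responsibilities above, assuming a PySpark environment such as a Databricks workspace; the paths, table names, and the 99% quality threshold are placeholders, not details of this role.

```python
from pyspark.sql import SparkSession, functions as F

# Illustrative sketch only: paths, table names, and thresholds are placeholders.
spark = SparkSession.builder.appName("orders_daily_etl").getOrCreate()

# Ingest: read raw JSON files landed by an upstream source system.
raw = spark.read.json("/mnt/raw/orders/2024-01-01/")

# Transform: normalise types, derive columns, and drop obviously bad rows.
orders = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .filter(F.col("order_id").isNotNull())
)

# Data quality check: fail the run if more than 1% of rows were rejected.
raw_count, clean_count = raw.count(), orders.count()
if clean_count < raw_count * 0.99:
    raise ValueError(f"Rejected {raw_count - clean_count} of {raw_count} rows")

# Load: append an analytics-ready table (Delta is typical on Databricks).
orders.write.format("delta").mode("append").saveAsTable("analytics.orders_daily")
```

In production such a job would typically be parameterised, scheduled by an orchestrator, and monitored, in line with the responsibilities listed above.
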
Requirements
  • Bachelor’s degree in Computer Science, IT, Engineering, or related field (or equivalent experience).
  • 6+ years of hands-on experience in data engineering, owning data pipelines end-to-end.
  • Proven experience building and supporting production data platforms.
  • Experience working with hybrid environments (on-premise and cloud data systems).
  • Strong skills in SQL and Python; experience with Apache Spark is a must.
  • Good understanding of data lakes, data warehouses, and lakehouse architectures.
  • Hands-on experience with relational and NoSQL databases; exposure to cloud data warehouses is a plus.
  • Experience with real-time/event streaming and data pipeline orchestration tools.
  • Knowledge of data security, access control, encryption, and compliance.
  • Experience implementing data quality checks, validation, and testing (see the testing sketch after this list).
  • Familiar with Git and CI/CD pipelines (Azure DevOps, GitHub Actions) and basic Infrastructure-as-Code.
  • Able to build high-performance datasets for BI tools such as Power BI, Tableau, or Qlik.
  • Exposure to machine learning pipelines and supporting batch or real-time inference is an advantage.
  • Strong communication skills with the ability to explain technical concepts clearly.
  • Independent, well-organized, and able to work effectively with cross-functional teams and vendors.
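
As a rough illustration of the testing and CI/CD expectations above, the sketch below unit-tests a small PySpark transformation with pytest against a local Spark session; the function, fixture, and column names are hypothetical.

```python
import pytest
from pyspark.sql import SparkSession, functions as F


def add_order_month(df):
    # Hypothetical transformation under test: derive a yyyy-MM month column.
    return df.withColumn("order_month", F.date_format("order_ts", "yyyy-MM"))


@pytest.fixture(scope="module")
def spark():
    # Local Spark session so the test can run on a CI agent.
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()


def test_add_order_month(spark):
    df = spark.createDataFrame(
        [("A-1", "2024-01-15 10:30:00")], ["order_id", "order_ts"]
    ).withColumn("order_ts", F.to_timestamp("order_ts"))

    assert add_order_month(df).collect()[0]["order_month"] == "2024-01"
```

Tests like this would normally run on every pull request through a CI pipeline such as GitHub Actions or Azure DevOps before the pipeline code is deployed.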

Your application will include the following questions:

  • What's your expected monthly basic salary?

