Data Engineer (Data Lake / BI / Analytics)

ZONBERATION GROUP (MALAYSIA) Sdn Bhd

Kuala Lumpur

On-site

MYR 80,000 - 120,000

Full time

Posted yesterday

Job summary

A leading data solutions provider in Kuala Lumpur is seeking multiple Data Engineers for tracks in Big Data and Business Intelligence. The ideal candidates will have over 3 years of experience in data engineering with expertise in SQL and Python. Responsibilities include optimizing ETL processes for data lakes and maintaining Power BI platforms. Proficiency in English is required, and knowledge of Mandarin is a significant advantage. This role provides an opportunity to work in a dynamic environment focused on data-driven solutions.

Qualifications

  • At least 3 years of professional experience in Data Engineering, ETL Development, or BI Analytics.
  • Strong hands-on proficiency in SQL and Python is mandatory for all roles.
  • Experience with the Hadoop ecosystem (Spark, HDFS, Hive) and containerization tools (Docker/Kubernetes).

Responsibilities

  • Design, develop, and optimize ETL processes for massive data lakes using Hadoop and Spark.
  • Maintain and optimize Power BI platforms and automate workflows using PowerShell and Azure.
  • Write high-performance SQL and Python scripts for data processing, cleaning, and automation.

Skills

SQL
Python
Hadoop ecosystem
Power BI
PowerShell
Kafka
Docker/K8s

Tools

Hadoop
Azure
Power BI

Job description

We are currently recruiting for multiple Data Engineer positions ranging from Mid-Level (3+ years) to Senior Specialists. By applying to this single posting, you will be considered for two distinct tracks within our engineering teams: Big Data Infrastructure (Data Lake) or Business Intelligence (BI) & Analytics.

Whether you are a PySpark expert handling PB-level data or a Power BI specialist focusing on Azure and governance, we have a spot for you.

What You Will Do (Depending on the track)
  • For Data Lake Track: Design, develop, and optimize ETL processes for massive data lakes (PB-level volume) using Hadoop and Spark.
  • For BI Track: Maintain and optimize Power BI platforms (workspaces, datasets, DAX) and automate workflows using PowerShell and Azure.
  • Common: Write high-performance SQL and Python scripts for data processing, cleaning, and automation.
  • Common: Collaborate with cross-functional teams to ensure data system stability, security, and governance.

Who We Are Looking For
  • Experience: At least 3 years of professional experience in Data Engineering, ETL Development, or BI Analytics.
  • Core Tech Stack: Strong hands-on proficiency in SQL and Python is mandatory for all roles.
  • Track-Specific Skills (You should match at least one):
    • Track A (Big Data): Experience with Hadoop ecosystem (Spark, HDFS, Hive) and containerization tools (Docker/K8s).
    • Track B (BI & Cloud): Experience with Power BI (DAX/Dataflows), Azure Cloud, and scripting with PowerShell.
  • Bonus Skills: Experience with log analysis (Network/User logs), Data Governance, or Linux system administration.
  • Language Requirement: Proficiency in English (reading/writing) is required; Mandarin proficiency is a strong advantage and is mandatory for specific project streams involving regional collaboration.