Senior Data Engineer (Databricks and Azure)

Unison Consulting Pte Ltd

Kuala Lumpur

On-site

MYR 90,000 - 120,000

Full time

30+ days ago

Job summary

A consulting firm in Kuala Lumpur is seeking a Senior Data Engineer with expertise in Databricks, PySpark, and Delta Lake. The candidate will design scalable ETL/ELT solutions, implement Lakehouse data architectures, and lead projects involving data ingestion from various sources. The ideal applicant has 5–8+ years of Data Engineering experience and strong technical leadership skills. The role also covers workflow automation and optimization, with the opportunity to mentor junior team members.

Qualifications

  • 5–8+ years of experience in Data Engineering.
  • Strong hands‑on experience with Databricks, PySpark, Delta Lake, SQL, and Python.
  • Experience with Azure Data Lake, ADF, or AWS equivalents.
  • Ability to optimize Spark workloads.
  • Familiarity with Git, Azure DevOps CI/CD, and YAML pipelines.

Responsibilities

  • Design, build, and optimize data ingestion and transformation pipelines.
  • Implement Delta Lake and Medallion architecture.
  • Develop ingestion frameworks for varied data sources (SFTP, REST APIs, SharePoint/Graph API, cloud storage).
  • Automate workflows using Databricks Workflows and Azure Functions.
  • Provide technical leadership and guidance to junior team members.

Skills

Data Engineering
Databricks
PySpark
Delta Lake
SQL
Python
Azure Data Lake
ADF
Azure Functions
Git

Tools

Terraform
Power BI
PostgreSQL

Job description

Overview

We are looking for a Senior Data Engineer with strong expertise in Databricks, PySpark, Delta Lake, and cloud-based data pipelines. The ideal candidate will design and build scalable ETL/ELT solutions, implement Lakehouse/Medallion architectures, and integrate data from multiple internal and external systems. This role requires strong technical leadership and hands-on architecture experience.

Key Responsibilities
  • Design, build, and optimize data ingestion and transformation pipelines using Databricks, PySpark, and Python.
  • Implement Delta Lake and Medallion architecture for scalable enterprise data platforms (see the sketch after this list).
  • Develop ingestion frameworks for data from SFTP, REST APIs, SharePoint/Graph API, AWS, and Azure sources.
  • Automate workflows using Databricks Workflows, ADF, Azure Functions, and CI/CD pipelines.
  • Optimize Spark jobs for performance, reliability, and cost efficiency.
  • Implement data validation, quality checks, and monitoring with automated alerts and retries (a retry/alert sketch appears after the lists below).
  • Design secure and governed datasets using Unity Catalog and cloud security best practices.
  • Collaborate with analysts, business users, and cross-functional teams to deliver curated datasets for reporting and analytics.
  • Provide technical leadership and guidance to junior team members.
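
As an illustration of the first three items, here is a minimal sketch of a bronze-to-silver Medallion step in PySpark with Delta Lake. It assumes a Databricks runtime (or a Spark session with the delta package); the landing path, business key, and table names are hypothetical placeholders, not part of this posting.

```python
# Minimal bronze -> silver Medallion sketch (hypothetical paths/tables).
from pyspark.sql import SparkSession, functions as F
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()  # pre-created on Databricks

# Bronze: land the raw feed as-is, stamped with ingestion metadata.
raw = (
    spark.read.format("json")
    .load("/mnt/landing/vendor_feed/")        # hypothetical landing zone
    .withColumn("_ingested_at", F.current_timestamp())
)
raw.write.format("delta").mode("append").saveAsTable("bronze.vendor_feed")

# Silver: de-duplicate and conform; MERGE keeps re-runs idempotent.
clean = (
    spark.table("bronze.vendor_feed")
    .filter(F.col("record_id").isNotNull())   # hypothetical business key
    .dropDuplicates(["record_id"])
)

(
    DeltaTable.forName(spark, "silver.vendor_feed")  # assumes table exists
    .alias("t")
    .merge(clean.alias("s"), "t.record_id = s.record_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```

The MERGE into silver is what makes re-ingestion safe: replayed bronze batches update existing keys instead of duplicating them.
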
Required Skills
  • 5–8+ years of experience in Data Engineering.
  • Strong hands‑on experience with Databricks, PySpark, Delta Lake, SQL, and Python.
  • Experience with Azure Data Lake, ADF, Azure Functions, or AWS equivalents (S3, Lambda).
  • Experience integrating data from APIs, SFTP servers, vendor data providers, and cloud storage.
  • Knowledge of ETL/ELT concepts, Lakehouse/Medallion architecture, and distributed processing.
  • Strong experience with Git, Azure DevOps CI/CD, and YAML pipelines.
  • Ability to optimize Spark workloads (partitioning, caching, Z‑ordering, performance tuning); see the tuning sketch after this list.
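
The tuning techniques named above can be illustrated briefly. Table and column names here are invented for the example, and OPTIMIZE ... ZORDER BY is a Delta/Databricks feature rather than core Spark:

```python
# Illustrative Spark tuning patterns (hypothetical table/column names).
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.table("silver.trades")

# Partitioning: repartition on the join key before a wide operation
# so the shuffle is balanced across executors.
df = df.repartition(200, "trade_date")

# Caching: persist a DataFrame that several downstream steps re-read,
# then materialize it once.
df.cache()
df.count()

# Z-ordering (Delta/Databricks SQL): co-locate rows on a selective
# filter column to improve data skipping on reads.
spark.sql("OPTIMIZE silver.trades ZORDER BY (counterparty_id)")
```
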
Good to Have
  • Exposure to Oil & Gas or trading analytics (SPARTA, KPLER, IIR, OPEC).
  • Knowledge of Power BI or data visualization concepts.
  • Familiarity with Terraform, Scala, or PostgreSQL.
  • Experience with SharePoint development or .NET (optional).
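
On the alerts-and-retries responsibility above, one minimal, framework-agnostic sketch is a retry wrapper around a pipeline step; the webhook URL and payload shape are hypothetical placeholders:

```python
# Retry-with-alert wrapper for a pipeline step; the webhook endpoint
# below is a hypothetical placeholder.
import json
import time
import urllib.request

def send_alert(message: str) -> None:
    """POST a failure notice to a (hypothetical) incident webhook."""
    req = urllib.request.Request(
        "https://hooks.example.com/data-alerts",      # placeholder URL
        data=json.dumps({"text": message}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

def run_with_retries(step, attempts: int = 3, backoff_s: float = 30.0):
    """Run step(), retrying with linear backoff; alert if every attempt fails."""
    for attempt in range(1, attempts + 1):
        try:
            return step()
        except Exception as exc:
            if attempt == attempts:
                send_alert(f"{step.__name__} failed after {attempts} attempts: {exc}")
                raise
            time.sleep(backoff_s * attempt)
```

In practice this pattern is often delegated to Databricks Workflows' built-in retry and notification settings; the wrapper just makes the logic explicit.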