Senior Data Engineer (Databricks and Azure)

Unison Group

Kuala Lumpur

On-site

MYR 80,000 - 120,000

Full time

Today

Job summary

A leading data services firm in Kuala Lumpur is seeking a Senior Data Engineer with expertise in Databricks and Azure. The role involves designing scalable ETL/ELT solutions, implementing Medallion architectures, and leading data integration from a variety of sources. Ideal candidates will have 5–8+ years of data engineering experience and strong hands-on skills in Databricks, PySpark, and cloud data pipelines. The right candidate will join a dynamic environment that values innovation and collaboration.

Qualifications

  • 5–8+ years of experience in Data Engineering.
  • Strong hands-on experience with Databricks, PySpark, Delta Lake, SQL, Python.
  • Experience with Azure Data Lake or AWS equivalents.

Responsibilities

  • Design, build, and optimize data ingestion and transformation pipelines.
  • Implement Delta Lake and Medallion architecture.
  • Develop ingestion frameworks for data from APIs, SFTP, and cloud sources.

Skills

Data Engineering
Databricks
PySpark
Delta Lake
SQL
Python
Azure Data Lake
Git
Azure DevOps CI/CD

Tools

Azure Functions
Terraform
Power BI

Job description

Senior Data Engineer (Databricks and Azure)

We are looking for a Senior Data Engineer with strong expertise in Databricks, PySpark, Delta Lake, and cloud-based data pipelines. The ideal candidate will design and build scalable ETL/ELT solutions, implement Lakehouse/Medallion architectures, and integrate data from multiple internal and external systems. This role requires strong technical leadership and hands-on architecture experience.

Key Responsibilities

  • Design, build, and optimize data ingestion and transformation pipelines using Databricks, PySpark, and Python.
  • Implement Delta Lake and Medallion architecture for scalable enterprise data platforms (see the sketch after this list).
  • Develop ingestion frameworks for data from SFTP, REST APIs, SharePoint/Graph API, AWS, and Azure sources.
  • Automate workflows using Databricks Workflows, ADF, Azure Functions, and CI/CD pipelines.
  • Optimize Spark jobs for performance, reliability, and cost efficiency.
  • Implement data validation, quality checks, and monitoring with automated alerts and retries.
  • Design secure and governed datasets using Unity Catalog and cloud security best practices.
  • Collaborate with analysts, business users, and cross-functional teams to deliver curated datasets for reporting and analytics.
  • Provide technical leadership and guidance to junior team members.
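
For illustration only, the sketch below shows the kind of bronze-to-silver Medallion step described above. The paths, the record_id key, and the vendor_feed name are hypothetical placeholders, not details of this role's actual platform.

  # Minimal PySpark sketch of a bronze -> silver step on Delta Lake.
  # All paths and the "record_id" business key are illustrative assumptions.
  from pyspark.sql import SparkSession, functions as F

  spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

  # Bronze: land raw extracts (e.g. an API or SFTP drop) unchanged,
  # stamped with ingestion metadata.
  raw = spark.read.json("/mnt/landing/vendor_feed/")  # hypothetical landing path
  bronze = raw.withColumn("_ingested_at", F.current_timestamp())
  bronze.write.format("delta").mode("append").save("/mnt/bronze/vendor_feed")

  # Silver: deduplicated, validated records ready for curated (gold) models.
  silver = (
      spark.read.format("delta").load("/mnt/bronze/vendor_feed")
      .dropDuplicates(["record_id"])           # assumes a business key column
      .filter(F.col("record_id").isNotNull())  # basic quality gate
  )
  silver.write.format("delta").mode("overwrite").save("/mnt/silver/vendor_feed")

In practice each layer would likely be a managed Delta table registered in Unity Catalog rather than a raw path, but the bronze/silver/gold flow is the same.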

Required Skills

  • 5–8+ years of experience in Data Engineering.
  • Strong hands-on experience with Databricks, PySpark, Delta Lake, SQL, Python.
  • Experience with Azure Data Lake, ADF, Azure Functions, or AWS equivalents (S3, Lambda).
  • Experience integrating data from APIs, SFTP servers, vendor data providers, and cloud storage.
  • Knowledge of ETL/ELT concepts, Lakehouse/Medallion architecture, and distributed processing.
  • Strong experience with Git, Azure DevOps CI/CD, and YAML pipelines.
  • Ability to optimize Spark workloads (partitioning, caching, Z-ordering, performance tuning); a short illustration follows this list.
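
As a rough illustration of the last point, the snippet below runs through repartitioning, caching, and Z-ordering on a Delta table. Table paths and column names are invented, and OPTIMIZE ... ZORDER BY assumes Databricks or open-source Delta Lake 2.0+.

  # Sketch of common Spark/Delta tuning moves; names are invented examples.
  from pyspark.sql import SparkSession

  spark = SparkSession.builder.appName("tuning-sketch").getOrCreate()

  df = spark.read.format("delta").load("/mnt/silver/trades")  # hypothetical table

  # Partitioning: repartition on the join/filter key to even out skewed tasks.
  balanced = df.repartition(200, "trade_date")

  # Caching: persist a frame that several downstream aggregations reuse.
  balanced.cache()
  balanced.groupBy("trade_date").count().show()

  # Z-ordering: co-locate files on a high-cardinality filter column so that
  # data skipping prunes more files at read time.
  spark.sql("OPTIMIZE delta.`/mnt/silver/trades` ZORDER BY (counterparty_id)")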

Good to Have

  • Exposure to Oil & Gas or trading analytics (SPARTA, KPLER, IIR, OPEC).
  • Knowledge of Power BI or data visualization concepts.
  • Familiarity with Terraform, Scala, or PostgreSQL.
  • Experience with SharePoint development or .NET (optional).
