Databricks Machine Learning Lead

AVANADE ASIA PTE LTD

Kuala Lumpur

On-site

MYR 120,000 - 180,000

Full time

Job summary

A leading technology consultancy based in Kuala Lumpur is seeking a Databricks ML Technical Lead. This senior, hands-on role involves designing, developing, and delivering scalable data solutions on the Databricks Lakehouse Platform. Candidates should have at least 8 years of experience in machine learning, expert proficiency in Python and PySpark/Scala, and specific expertise in Databricks. The role also emphasizes performance optimization and defining coding standards for an engineering team.

Qualifications

  • 8+ years of experience in machine learning, data science, or MLOps.
  • 4+ years of experience on the Databricks Lakehouse Platform.
  • Expert proficiency in Python and PySpark/Scala.
  • Deep understanding of Delta Lake architecture.

Responsibilities

  • Define and implement robust data architectures using Databricks.
  • Write high‑quality code in PySpark/Scala and SQL for ETL/ELT pipelines.
  • Lead performance tuning for large‑scale Spark jobs.
  • Define and enforce technical standards and best practices.

Skills

Machine learning
Data science
MLOps
Python
PySpark
Scala

Tools

Databricks Lakehouse Platform
MLflow
Delta Lake
Terraform
Kafka

Job description

The Databricks ML Technical Lead is a senior, hands‑on role responsible for the design, development, and delivery of highly scalable, secure, and performant data solutions on the Databricks Lakehouse Platform. The lead provides technical leadership to a team of engineers, defining coding standards, implementing architectural patterns, and ensuring the delivery of high-quality data products.

Key Responsibilities
  • Lead the Design: Define and implement robust data architectures utilizing the Databricks ecosystem, including Delta Lake, Unity Catalog, machine learning models, and Databricks Workflows.
  • Hands‑on Development: Serve as the most senior developer, writing high‑quality, production‑grade code in PySpark/Scala and SQL for complex batch and streaming ETL/ELT pipelines (a minimal sketch follows this list).
  • Performance Optimization: Lead performance tuning and optimization efforts for large‑scale Spark jobs, ensuring efficient cluster utilization and cost management.
  • Standards & Best Practices: Define and enforce technical standards, code quality, testing frameworks (unit, integration), and DataOps/CI/CD pipelines for the engineering team.
Qualifications
  • 8+ years of experience in machine learning, data science, or MLOps, including at least 4 years focused specifically on the Databricks Lakehouse Platform.
  • Expert proficiency in Python and PySpark/Scala for large‑scale data processing and machine learning.
  • Deep understanding and practical experience with Delta Lake architecture and optimization techniques.
  • Proven expertise implementing MLOps principles using MLflow (Tracking, Registry, Projects, and Deployment); see the sketch below.
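
To make the MLflow expectation concrete, here is a minimal sketch of tracking a training run and registering the resulting model. The experiment path, toy dataset, and model name are hypothetical, not part of the role.

    # Minimal MLflow sketch: track a run and register the model.
    import mlflow
    import mlflow.sklearn
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(n_samples=1000, random_state=42)

    mlflow.set_experiment("/Shared/churn_demo")  # hypothetical experiment path
    with mlflow.start_run():
        model = RandomForestClassifier(n_estimators=200, random_state=42)
        model.fit(X, y)
        mlflow.log_param("n_estimators", 200)
        mlflow.log_metric("train_accuracy", model.score(X, y))
        # Log the model and register it in the Model Registry in one step.
        mlflow.sklearn.log_model(model, "model",
                                 registered_model_name="churn_classifier")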
Preferred Skills & Certifications
  • Experience with Databricks features such as Databricks Workflows and Unity Catalog.
  • Experience with streaming technologies (e.g., Kafka, Spark Streaming); see the sketch after this list.
  • Familiarity with CI/CD tools and Infrastructure-as-Code (e.g., Terraform, Databricks Asset Bundles).
  • Databricks Certified Machine Learning Professional certification.
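
For the streaming experience listed above, a minimal Structured Streaming sketch reading a Kafka topic into a Delta table might look like the following; the broker address, topic name, and checkpoint path are placeholders.

    # Minimal Structured Streaming sketch: Kafka topic into a Delta table.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("events_stream").getOrCreate()

    events = (
        spark.readStream.format("kafka")
             .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
             .option("subscribe", "events")                     # placeholder topic
             .option("startingOffsets", "latest")
             .load()
             # Kafka delivers bytes; cast the payload before parsing downstream.
             .select(F.col("value").cast("string").alias("json_payload"),
                     F.col("timestamp").alias("event_ts"))
    )

    (events.writeStream.format("delta")
           .option("checkpointLocation", "/tmp/checkpoints/events")  # placeholder path
           .outputMode("append")
           .toTable("bronze.events"))

The checkpoint location, combined with Delta's transactional writes, is what gives the pipeline exactly-once recovery semantics across restarts.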