Principal Engineer – Data Platforms & MLOps (Databricks)

Codvo.ai

Pune District

On-site

INR 20,00,000 - 28,00,000

Full time

Job summary

A leading technology services company is seeking a Principal Engineer specializing in Databricks to design and scale enterprise data platforms. The role involves building ETL pipelines, implementing security frameworks, and mentoring engineers. Ideal candidates will have over 8 years in data engineering with a strong background in cloud integration and MLOps. Competitive compensation is offered, and the role is based on-site in Pune, India.

Qualifications

  • 8+ years in large-scale data engineering or platform engineering.
  • 3+ years hands-on Databricks experience.
  • Proven track record of building and scaling Databricks workloads in production.

Responsibilities

  • Design and implement data architectures on Databricks Lakehouse.
  • Build and optimize ETL/ELT pipelines with Delta Live Tables.
  • Operationalize ML models using MLflow and CI/CD pipelines.

Skills

Databricks experience
Data engineering
Cloud integration
MLOps
Programming in PySpark, SQL, Python

Tools

Databricks
Unity Catalog
MLflow

Job description

Principal Engineer – Data Platforms & MLOps (Databricks)

Company Overview

At Codvo, software and people transformations go hand-in-hand. We are a global empathy-led technology services company. Product innovation and mature software engineering are part of our core DNA. Respect, Fairness, Growth, Agility, and Inclusiveness are the core values that we aspire to live by each day.

We continue to expand our digital strategy, design, architecture, and product management capabilities to offer expertise, outside-the-box thinking, and measurable results.

Role Overview

We are looking for a hands-on Principal Engineer with deep expertise in Databricks to design, build, and scale enterprise-grade data platforms and MLOps pipelines. You will be the technical authority on how enterprises adopt and get the most from Databricks — from ingestion to governance to machine learning deployment — and a mentor who raises the bar for engineering excellence.

Key Responsibilities
  • Platform Architecture: Design and implement end-to-end data architectures on Databricks Lakehouse, covering ingestion, transformation, storage, and analytics.
  • Pipelines & Workflows: Build and optimize ETL/ELT pipelines with Delta Live Tables, Spark Structured Streaming, and workflow orchestration.
  • Governance & Security: Implement Unity Catalog, fine-grained access controls, and compliance frameworks across enterprise data estates.
  • MLOps at Scale: Operationalize ML models using MLflow, Model Registry, and CI/CD pipelines integrated with cloud DevOps tools.
  • Performance & Cost Optimization: Tune Databricks clusters, jobs, and workflows for scale, speed, and efficiency across multi-cloud deployments.
  • Client Advisory: Work closely with enterprise stakeholders to provide best practices, reference architectures, and accelerators tailored to their use cases.
  • Mentorship & Standards: Guide engineers in Databricks best practices, enforce coding standards, and lead design/code reviews.
Qualifications
  • 8+ years in large-scale data engineering / platform engineering, with 3+ years hands-on Databricks experience.
  • Deep expertise in:
    • Databricks Lakehouse Platform (Delta Lake, Delta Live Tables, Databricks SQL).
    • Governance & Security with Unity Catalog.
    • MLOps with MLflow and model lifecycle management.
  • Strong programming skills in PySpark, SQL, Python; experience with Scala a plus.
  • Hands-on with cloud integration (AWS, Azure, or GCP) and DevOps pipelines (Terraform, GitHub Actions, Azure DevOps, etc.).
  • Proven track record of building and scaling Databricks workloads in production for enterprise clients.