
Senior Databricks Data Engineer

Jobgether

Remote

GBP 70,000 - 90,000

Full time

Today

Job summary

A leading recruitment intermediary is seeking a Senior Databricks Data Engineer in the United Kingdom. The role involves designing and optimizing data pipelines using Azure Databricks and collaborating with data architects and security teams. Candidates should have at least 5 years of experience with Azure Databricks and strong proficiency in Python and SQL. The position is fully remote and offers the opportunity to work on significant data projects across the enterprise.

Benefits

Competitive compensation
Fully remote work environment
Comprehensive benefits plan

Qualifications

  • Minimum 5 years of professional experience delivering Azure Databricks solutions.
  • Strong expertise in Databricks components and Azure Data Platform services.
  • Proficiency in Python, SQL, PySpark, Git, and distributed computing.

Responsibilities

  • Design, develop, and optimize ETL/ELT data pipelines using Azure Databricks.
  • Configure and maintain Databricks workspaces and clusters.
  • Implement and enforce data governance and security best practices.

Skills

Azure Databricks solutions
Python
SQL
PySpark
Data governance
Agile methodologies
Fluency in Portuguese
Fluency in English

Tools

Terraform
Azure Data Platform services
MLflow

Job description

This position is posted by Jobgether on behalf of a partner company. We are currently looking for a Senior Databricks Data Engineer in the United Kingdom.

In this role, you will design, build, and optimize enterprise‑scale data pipelines on Azure Databricks, supporting structured, semi‑structured, and unstructured data. You will work closely with data architects, security teams, and business stakeholders to implement best practices for data governance, security, and high‑performance data processing. This position involves hands‑on development of Delta Lake architectures, CI/CD pipelines, and orchestrated workflows to deliver reliable, scalable data products. Operating in a fully remote, collaborative environment, you will have the opportunity to influence the overall data platform strategy while solving complex technical challenges. Your work will enable analytics, reporting, and AI initiatives across the enterprise. The role combines technical depth, architectural insight, and operational excellence, offering strong growth potential in cloud data engineering.

Accountabilities

  • Design, develop, and optimize ETL/ELT data pipelines using Azure Databricks (Python, PySpark, SQL, Delta Lake).
  • Configure and maintain Databricks workspaces, clusters, jobs, repositories, and workflow schedules for multi‑team data product delivery.
  • Implement and enforce data governance and security best practices, including access controls, lineage, and auditing frameworks.
  • Build and maintain Delta Lake architectures with medallion (bronze/silver/gold) layer structures, as sketched after this list.
  • Integrate Databricks pipelines with Azure Data Platform services, ensuring reliable orchestration, observability, and CI/CD automation.
  • Collaborate with data architects, data owners, and cross‑functional teams to align platform solutions with enterprise standards.
  • Optimize pipeline performance, compute cost, and system efficiency through code‑level and cluster‑level tuning strategies.
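
As a rough illustration of the medallion-layer work described above, the sketch below shows what a minimal bronze/silver/gold pipeline on Azure Databricks could look like in PySpark. It is not taken from the posting: the table names, landing path, and columns (order_id, order_ts, amount) are hypothetical assumptions.

    # Minimal medallion-style Delta Lake pipeline sketch (illustrative only;
    # table names, paths, and columns are hypothetical, not from this posting).
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()  # provided automatically in Databricks notebooks

    # Bronze: land raw source files as-is to preserve source fidelity.
    raw = spark.read.format("json").load("/mnt/landing/orders/")  # hypothetical path
    raw.write.format("delta").mode("append").saveAsTable("bronze.orders")

    # Silver: conform types, de-duplicate, and drop clearly invalid records.
    silver = (spark.table("bronze.orders")
              .withColumn("order_ts", F.to_timestamp("order_ts"))
              .dropDuplicates(["order_id"])
              .filter(F.col("order_id").isNotNull()))
    silver.write.format("delta").mode("overwrite").saveAsTable("silver.orders")

    # Gold: aggregate into a business-ready reporting table.
    gold = (spark.table("silver.orders")
            .groupBy(F.to_date("order_ts").alias("order_date"))
            .agg(F.sum("amount").alias("daily_revenue")))
    gold.write.format("delta").mode("overwrite").saveAsTable("gold.daily_revenue")

In practice each layer would typically run as a scheduled Databricks job or Delta Live Tables pipeline rather than a single script, but the layering logic is the same.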

Requirements

  • Minimum 5 years of professional experience delivering Azure Databricks solutions in enterprise environments.
  • Strong expertise in Databricks components: Workspaces, Notebooks, Jobs, Workflows, Repos, Unity Catalog, Delta Lake, Delta Live Tables, and MLflow.
  • Solid knowledge of Azure Data Platform services: ADLS Gen2, Azure Key Vault, Azure Monitor, Azure Log Analytics, and Microsoft Entra ID/RBAC; experience with the Terraform provider is a plus.
  • Experience implementing data security and governance frameworks, including access controls, masking, row‑level security, ABAC, governed tags, credential management, lineage, and auditability (see the Unity Catalog sketch after this list).
  • Proficiency in Python, SQL, PySpark, Git, Spark performance tuning, and distributed computing concepts.
  • Familiarity with AI/ML lifecycle and MLflow model management.
  • Experience working in Agile or DevOps‑oriented teams, with strong analytical, problem‑solving, and communication skills.
  • Fluency in Portuguese and English.
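
To make the governance items above concrete, here is a hedged sketch of Unity Catalog access control, row‑level security, and column masking, expressed as SQL run from Python. The catalog/schema/table (main.sales.orders), groups (analysts, admins, finance), and function names are illustrative assumptions rather than details from the posting; the statements themselves follow standard Unity Catalog syntax.

    # Grant read access on a governed table to a (hypothetical) analyst group.
    spark.sql("GRANT SELECT ON TABLE main.sales.orders TO `analysts`")

    # Row-level security: admins see every row, everyone else only UK rows.
    spark.sql("""
        CREATE OR REPLACE FUNCTION main.sales.uk_rows_only(region STRING)
        RETURNS BOOLEAN
        RETURN is_account_group_member('admins') OR region = 'UK'
    """)
    spark.sql("ALTER TABLE main.sales.orders "
              "SET ROW FILTER main.sales.uk_rows_only ON (region)")

    # Column masking: redact card numbers for users outside the finance group.
    spark.sql("""
        CREATE OR REPLACE FUNCTION main.sales.mask_card(card STRING)
        RETURNS STRING
        RETURN CASE WHEN is_account_group_member('finance') THEN card ELSE '****' END
    """)
    spark.sql("ALTER TABLE main.sales.orders "
              "ALTER COLUMN card_number SET MASK main.sales.mask_card")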

Benefits

  • Competitive compensation aligned with experience.
  • Fully remote work environment.
  • Delivery of work equipment suited to the role and responsibilities.
  • Comprehensive benefits plan.
  • Opportunity to work with expert teams on high‑impact, large‑scale projects.
  • Exposure to long‑term, strategic client initiatives in diverse industries.

Why Apply Through Jobgether?

We use an AI‑powered matching process to ensure your application is reviewed quickly, objectively, and fairly against the role's core requirements. Our system identifies the top‑fitting candidates, and this shortlist is then shared directly with the hiring company. The final decision and next steps (interviews, assessments) are managed by their internal team.

We appreciate your interest and wish you the best!

Data Privacy Notice

By submitting your application, you acknowledge that Jobgether will process your personal data to evaluate your candidacy and share relevant information with the hiring employer. This processing is based on legitimate interest and pre‑contractual measures under applicable data protection laws (including GDPR). You may exercise your rights (access, rectification, erasure, objection) at any time.
