Lead Databricks Engineer - Single Customer View

Sanderson Recruitment

Bournemouth

Hybrid

GBP 85,000

Full time

24 days ago

Job summary

A leading recruitment agency seeks a highly skilled Lead Databricks Engineer for a 12-month FTC in Bournemouth. You will design and implement scalable data pipelines, working with Databricks, PySpark, and Azure Data Factory to support the Single Customer View Programme. Ideal candidates will have experience applying data engineering best practices to deliver accurate analytics and regulatory compliance. A competitive salary of £85,000 plus benefits is offered.

Qualifications

  • Experience in building and maintaining Databricks pipelines using PySpark and SQL.
  • Proficient in orchestrating workflows with Azure Data Factory.
  • Knowledge of Delta Lake and Medallion Architecture.

Responsibilities

  • Build and maintain Databricks pipelines using PySpark and SQL.
  • Orchestrate workflows with Azure Data Factory.
  • Develop Delta Lake tables for business-ready datasets.

Skills

Databricks
PySpark
Azure Data Factory
SQL

Job description

Lead Databricks Engineer - Single Customer View

Location: Bournemouth (Hybrid)

Contract: 12-month FTC

Salary: £85,000 + Benefits

Lead Databricks Engineer: About the Role

Join our Single Customer View (SCV) Programme, a strategic initiative within Financial Services aimed at delivering a unified, trusted view of customer data.

We're seeking a highly skilled Lead Databricks Engineer to design and implement scalable data pipelines that form the backbone of our Lakehouse platform, enabling accurate analytics, reporting, and regulatory compliance.

You'll work with cutting‑edge technologies including Databricks, PySpark, and Azure Data Factory, applying best practices in data engineering and governance to support this critical programme.

Lead Databricks Engineer: Key Responsibilities

  • Build and maintain Databricks pipelines (batch and incremental) using PySpark and SQL.
  • Orchestrate end‑to‑end workflows with Azure Data Factory.
  • Develop and optimise Delta Lake tables (partitioning, schema evolution, vacuuming).
  • Implement Medallion Architecture (Bronze, Silver, Gold) for transforming raw data into business-ready datasets; a brief sketch follows below.
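
For context on the pattern named above, here is a minimal sketch of a Medallion-style Delta Lake pipeline in PySpark. It is illustrative only: every path, table and column name is invented for the example rather than taken from the SCV programme, and it assumes a Databricks notebook environment where spark is predefined and Delta Lake is built in.

  # Illustrative only: paths and column names below are hypothetical,
  # not the programme's real schema. Assumes a Databricks notebook,
  # where `spark` is predefined and Delta Lake is available.
  from pyspark.sql import functions as F

  # Bronze: land raw customer records as-is, stamped with an ingest date.
  raw = (spark.read.json("/mnt/landing/customers/")
             .withColumn("ingest_date", F.current_date()))
  (raw.write.format("delta")
      .mode("append")
      .option("mergeSchema", "true")   # tolerate upstream schema evolution
      .partitionBy("ingest_date")      # example partitioning strategy
      .save("/mnt/bronze/customers"))

  # Silver: deduplicate and standardise into a conformed customer view.
  bronze = spark.read.format("delta").load("/mnt/bronze/customers")
  silver = (bronze.dropDuplicates(["customer_id"])
                  .withColumn("email", F.lower(F.trim(F.col("email")))))
  silver.write.format("delta").mode("overwrite").save("/mnt/silver/customers")

  # Gold: business-ready aggregate for analytics and reporting.
  gold = (silver.groupBy("region")
                .agg(F.countDistinct("customer_id").alias("customer_count")))
  gold.write.format("delta").mode("overwrite").save("/mnt/gold/customer_counts")

  # Housekeeping: VACUUM removes stale files past the 7-day default retention.
  spark.sql("VACUUM delta.`/mnt/bronze/customers` RETAIN 168 HOURS")

In practice, the batch and incremental loads mentioned above would typically use Auto Loader or MERGE INTO against the Delta tables, with Azure Data Factory triggering the notebooks on a schedule.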
