Senior Data Engineer / Scientist

Head Resourcing Ltd

Glasgow

On-site

GBP 65,000 - 90,000

Full time

Today

Job summary

A leading UK consumer business is looking for a Senior Data Engineer to contribute to a major data transformation project. The role involves engineering scalable ELT pipelines, building data models, and optimising orchestration using Azure and Databricks. Ideal candidates should have over 5 years of Data Engineering experience, especially with Azure and Databricks. This is an onsite role located in Glasgow, with a strong emphasis on data quality and governance.

Qualifications

  • 5-8+ years of Data Engineering with Azure + Databricks.
  • Strong PySpark/Spark SQL expertise.
  • Proven Medallion/Lakehouse delivery experience.

Responsibilities

  • Engineer scalable ELT pipelines using Lakeflow.
  • Build clean, conformed Silver/Gold models.
  • Design and optimise orchestration with Azure Data Factory.

Skills

PySpark
Spark SQL
Data Engineering
CI/CD
Observability

Tools

Azure Data Factory
Databricks
Azure DevOps

Job description
Senior Data Engineer - Azure & Databricks Lakehouse
Glasgow (3/4 days onsite) | Exclusive Role with a Leading UK Consumer Business

A rapidly scaling UK consumer brand is undertaking a major data modernisation programme: moving away from legacy systems, manual Excel reporting and fragmented data sources into a fully automated Azure Enterprise Landing Zone + Databricks Lakehouse. They are building a modern data platform from the ground up using Lakeflow Declarative Pipelines, Unity Catalog and Azure Data Factory, and this role sits right at the heart of that transformation.

This is a rare opportunity to join early, influence architecture, and help define the engineering standards, pipelines, curated layers and best practices that will support Operations, Finance, Sales, Logistics and Customer Care. If you want to build a best-in-class Lakehouse from scratch, this is the one.

What You'll Be Doing
Lakehouse Engineering (Azure + Databricks)
  • Engineer scalable ELT pipelines using Lakeflow Declarative Pipelines, PySpark, and Spark SQL across a full Medallion Architecture (Bronze → Silver → Gold).
  • Implement ingestion patterns for files, APIs, SaaS platforms (e.g. subscription billing), SQL sources, SharePoint and SFTP using ADF + metadata-driven frameworks.
  • Apply Lakeflow expectations for data quality, schema validation and operational reliability (see the sketch after this list).
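
For a flavour of the hands-on work, here is a minimal sketch of a Bronze-to-Silver step with expectations. It assumes the dlt Python module exposed inside a Lakeflow Declarative Pipeline (formerly Delta Live Tables); the table names, paths and columns are illustrative, not the client's actual code.

    # Minimal sketch: Auto Loader lands raw files into Bronze, then a Silver
    # table applies expectations. `spark` is provided by the pipeline runtime.
    import dlt
    from pyspark.sql import functions as F

    @dlt.table(comment="Raw subscription events landed in ADLS Gen2")
    def bronze_subscriptions():
        # Auto Loader incrementally picks up new files from the landing zone
        return (
            spark.readStream.format("cloudFiles")
            .option("cloudFiles.format", "json")
            .load("abfss://landing@<storage-account>.dfs.core.windows.net/subscriptions/")
        )

    @dlt.table(comment="Validated, conformed subscription events")
    @dlt.expect_or_drop("valid_customer", "customer_id IS NOT NULL")
    @dlt.expect_or_drop("non_negative_amount", "amount >= 0")
    def silver_subscriptions():
        return (
            dlt.read_stream("bronze_subscriptions")
            .withColumn("ingested_at", F.current_timestamp())
        )

Note that expect_or_drop removes failing rows, while plain expect keeps them and only records quality metrics.
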
Curated Data Layers & Modelling
  • Build clean, conformed Silver/Gold models aligned to enterprise business domains (customers, subscriptions, deliveries, finance, credit, logistics, operations).
  • Deliver star schemas, harmonisation logic, SCDs and business marts to power high-performance Power BI datasets (an SCD Type 2 sketch follows this list).
  • Apply governance, lineage and fine-grained permissions via Unity Catalog.
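
The SCD work above typically lands as a Delta Lake MERGE. Below is a hedged Type 2 sketch using the Delta Lake Python API; the dimension and staging table names, the customer_id key, the attr_hash change-detection column and the effective_from timestamp are all illustrative assumptions.

    # Sketch of an SCD Type 2 upsert: close the current row version when
    # attributes change, and insert the replacement in the same MERGE.
    from delta.tables import DeltaTable
    from pyspark.sql import functions as F

    dim = DeltaTable.forName(spark, "silver.dim_customer")
    updates = spark.table("bronze.customer_changes")

    # Changed rows are staged twice: once keyed (to close the old version)
    # and once with a NULL merge key (to fall through to the insert clause).
    changed = (
        updates.alias("s")
        .join(
            dim.toDF().alias("t"),
            (F.col("s.customer_id") == F.col("t.customer_id"))
            & F.col("t.is_current")
            & (F.col("s.attr_hash") != F.col("t.attr_hash")),
        )
        .select("s.*")
    )
    staged = (
        updates.withColumn("merge_key", F.col("customer_id"))
        .unionByName(changed.withColumn("merge_key", F.lit(None).cast("string")))
    )

    (dim.alias("t")
     .merge(staged.alias("s"), "t.customer_id = s.merge_key AND t.is_current = true")
     .whenMatchedUpdate(
         condition="t.attr_hash <> s.attr_hash",
         set={"is_current": "false", "valid_to": "s.effective_from"},
     )
     .whenNotMatchedInsert(values={
         "customer_id": "s.customer_id",
         "attr_hash": "s.attr_hash",
         "valid_from": "s.effective_from",
         "valid_to": "CAST(NULL AS TIMESTAMP)",
         "is_current": "true",
     })
     .execute())

The NULL-keyed copy is the standard trick that lets a single MERGE both expire the outgoing version and insert its successor.
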
Orchestration & Observability
  • Design and optimise orchestration using Lakeflow Workflows and Azure Data Factory (see the sketch after this list).
  • Implement monitoring, alerting, SLAs/SLIs, runbooks and cost-optimisation across the platform.
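
To illustrate the Workflows side, the sketch below schedules a nightly pipeline refresh through the Databricks SDK for Python; the job name, pipeline ID and cron expression are assumptions, not the client's actual configuration.

    # Sketch: a Lakeflow Workflows (Databricks Jobs) job that triggers a
    # declarative pipeline on a nightly schedule.
    from databricks.sdk import WorkspaceClient
    from databricks.sdk.service import jobs

    w = WorkspaceClient()  # credentials resolved from env vars or a config profile

    job = w.jobs.create(
        name="nightly-lakehouse-refresh",
        tasks=[
            jobs.Task(
                task_key="refresh_silver_gold",
                pipeline_task=jobs.PipelineTask(pipeline_id="<pipeline-id>"),
            )
        ],
        schedule=jobs.CronSchedule(
            quartz_cron_expression="0 0 2 * * ?",  # 02:00 daily
            timezone_id="Europe/London",
        ),
    )
    print(f"Created job {job.job_id}")
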
DevOps & Platform Engineering
  • Build CI/CD pipelines in Azure DevOps for notebooks, Lakeflow pipelines, SQL models and ADF artefacts.
  • Ensure secure, enterprise-grade platform operation across Dev → Prod, using private endpoints, managed identities and Key Vault (see the Key Vault sketch after this list).
  • Contribute to platform standards, design patterns, code reviews and future roadmap.
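
On the security point above, pipelines typically resolve credentials at runtime through the environment's identity rather than storing them. A minimal sketch using the azure-identity and azure-keyvault-secrets packages; the vault URL and secret name are illustrative.

    # Sketch: fetch a secret via whatever identity the environment provides
    # (a managed identity in Dev/Prod, a developer login locally).
    from azure.identity import DefaultAzureCredential
    from azure.keyvault.secrets import SecretClient

    credential = DefaultAzureCredential()
    client = SecretClient(
        vault_url="https://<vault-name>.vault.azure.net",
        credential=credential,
    )
    sql_password = client.get_secret("sql-source-password").value
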
Collaboration & Delivery
  • Work closely with BI/Analytics teams to deliver curated datasets powering dashboards across the organisation.
  • Influence architecture decisions and uplift engineering maturity within a growing data function.
Tech Stack You'll Work With
  • Databricks: Lakeflow Declarative Pipelines, Workflows, Unity Catalog, SQL Warehouses
  • Azure: ADLS Gen2, Data Factory, Key Vault, vNets & Private Endpoints
  • Languages: PySpark, Spark SQL, Python, Git
  • DevOps: Azure DevOps Repos, Pipelines, CI/CD
  • Analytics: Power BI, Fabric
What We're Looking For
Experience
  • 5-8+ years of Data Engineering with 2-3+ years delivering production workloads on Azure + Databricks.
  • Strong PySpark/Spark SQL and distributed data processing expertise.
  • Proven Medallion/Lakehouse delivery experience using Delta Lake.
  • Solid dimensional modelling (Kimball) including surrogate keys, SCD types 1/2, and merge strategies.
  • Operational experience: SLAs, observability, idempotent pipelines, reprocessing, backfills.
Mindset
  • Strong grounding in secure Azure Landing Zone patterns.
  • Comfort with Git, CI/CD, automated deployments and modern engineering standards.
  • Clear communicator who can translate technical decisions into business outcomes.
Nice to Have
  • Databricks Certified Data Engineer Associate
  • Streaming ingestion experience (Auto Loader, structured streaming, watermarking); see the sketch after this list
  • Subscription/entitlement modelling experience
  • Advanced Unity Catalog security (RLS, ABAC, PII governance)
  • Terraform/Bicep for IaC
  • Fabric Semantic Model / Direct Lake optimisation
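
For the streaming item above, here is a minimal sketch of Auto Loader ingestion with a watermark to bound state for late-arriving events; the paths, column names and 30-minute lateness tolerance are illustrative assumptions.

    # Sketch: incremental file ingestion with Auto Loader, deduplicated under
    # a watermark, written to a Bronze table in batch-style increments.
    from pyspark.sql import functions as F

    events = (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .option("cloudFiles.schemaLocation", "abfss://lake@<storage-account>.dfs.core.windows.net/_schemas/events")
        .load("abfss://landing@<storage-account>.dfs.core.windows.net/events/")
        .withColumn("event_time", F.col("event_time").cast("timestamp"))
    )

    deduped = (
        events.withWatermark("event_time", "30 minutes")
        .dropDuplicates(["event_id", "event_time"])
    )

    (deduped.writeStream
     .option("checkpointLocation", "abfss://lake@<storage-account>.dfs.core.windows.net/_checkpoints/events")
     .trigger(availableNow=True)  # drain available files, then stop
     .toTable("bronze.events"))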