Data Engineer

Stott and May

Reading

Hybrid

GBP 60,000 - 80,000

Full time

2 days ago

Job summary

A leading recruitment agency in the UK seeks a Senior Data Engineer to join the Data & Analytics team. This role, requiring hybrid working with 2-3 days in-office, focuses on designing and maintaining production-grade data products using Databricks. Candidates should have 6+ years in data engineering, with strong Python and SQL skills. The successful applicant will collaborate closely with various teams to meet business requirements and ensure data governance standards are maintained.

Qualifications

  • 6+ years in data engineering or advanced analytics engineering roles.
  • Strong hands-on expertise in Python and SQL.
  • Proven experience building production pipelines in Databricks.

Responsibilities

  • Design, build, and maintain pipelines in Databricks using Delta Lake.
  • Implement medallion architectures and deliver data products.
  • Ensure pipelines meet non-functional requirements.

Skills

Python
SQL
Databricks
Data governance
Data quality

Job description

Job Title: Senior Data Engineer

Location: UK (Hybrid, 2–3 days per week in-office)

Rate: £446/day (Inside IR35)

Contract Duration: 6 months

Additional Requirements: May require occasional travel to the Dublin office

Overview

We are looking for an experienced Senior Data Engineer to join a Data & Analytics (DnA) team. You will design, build, and operate production-grade data products across customer, commercial, financial, sales, and broader data domains. This role is hands-on and heavily focused on Databricks-based engineering, data quality, governance, and DevOps-aligned delivery.

You will work closely with the Data Engineering Manager, Product Owner, Data Product Manager, Data Scientists, Head of Data & Analytics, and IT teams to transform business requirements into governed, decision-grade datasets embedded in business processes and trusted for reporting, analytics, and advanced use cases.

Responsibilities
  • Design, build, and maintain pipelines in Databricks using Delta Lake and Delta Live Tables.
  • Implement medallion architectures (Bronze/Silver/Gold) and deliver reusable, discoverable data products (see the illustrative sketch after this list).
  • Ensure pipelines meet non-functional requirements such as freshness, latency, completeness, scalability, and cost-efficiency.
  • Own and operate Databricks assets including jobs, notebooks, SQL, and Unity Catalog objects.
  • Apply Git-based DevOps practices, CI/CD, and Databricks Asset Bundles to safely promote changes across environments.
  • Implement monitoring, alerting, runbooks, incident response, and root-cause analysis.
  • Enforce governance and security using Unity Catalog (lineage, classification, ACLs, row/column-level security).
  • Define and maintain data-quality rules, expectations, and SLOs within pipelines.
  • Support root-cause analysis of data anomalies and production issues.
  • Partner with Product Owner, Product Manager, and business stakeholders to translate requirements into functional and non-functional delivery scope.
  • Collaborate with IT platform teams to define data contracts, SLAs, and schema evolution strategies.
  • Produce clear technical documentation (data contracts, source-to-target mappings, release notes).
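
Purely as an illustration of the Delta Live Tables and data-quality responsibilities listed above (not part of the role specification), a minimal Bronze-to-Silver medallion sketch might look like the following. The landing path, table names, and columns (bronze_orders, silver_orders, order_id, event_ts) are hypothetical, and `spark` is assumed to be injected by the Databricks runtime inside a DLT pipeline.

    # Hypothetical Bronze -> Silver flow in Delta Live Tables (DLT).
    # Table names, landing path, and columns are illustrative assumptions only.
    import dlt
    from pyspark.sql import functions as F

    @dlt.table(comment="Bronze: raw order events ingested as-is via Auto Loader")
    def bronze_orders():
        return (
            spark.readStream.format("cloudFiles")        # incremental file ingestion
            .option("cloudFiles.format", "json")
            .load("/Volumes/main/raw/orders/")           # hypothetical landing location
        )

    @dlt.table(comment="Silver: cleaned, deduplicated orders with quality expectations")
    @dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")  # drop rows failing the rule
    @dlt.expect("recent_event", "event_ts >= '2020-01-01'")        # warn-only expectation
    def silver_orders():
        return (
            dlt.read_stream("bronze_orders")
            .withColumn("event_ts", F.to_timestamp("event_ts"))
            .dropDuplicates(["order_id"])
        )
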
Qualifications
  • 6+ years in data engineering or advanced analytics engineering roles.
  • Strong hands-on expertise in Python and SQL.
  • Proven experience building production pipelines in Databricks.
  • Excellent attention to detail, with the ability to create effective documentation and process diagrams.
  • Solid understanding of data modelling, performance tuning, and cost optimisation.
Desirable Skills
  • Hands-on experience with Databricks Lakehouse, including Delta Lake and Delta Live Tables for batch/stream pipelines.
  • Knowledge of pipeline health monitoring, SLA/SLO management, and incident response.
  • Unity Catalog governance and security expertise, including lineage, table ACLs, and row/column-level security.
  • Familiarity with Databricks DevOps/DataOps practices (Git-based development, CI/CD, automated testing).
  • Performance and cost optimisation strategies for Databricks (autoscaling, Photon/serverless, partitioning, Z-Ordering, OPTIMIZE/VACUUM); see the illustrative sketch after this list.
  • Semantic layer and metrics engineering experience for consistent business metrics and self-service analytics.
  • Experience with cloud-native analytics platforms (preferably Azure) operating as enterprise-grade production services.
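
Likewise, as a rough, non-authoritative sketch of the Unity Catalog governance and table-maintenance items above, the snippet below shows how grants, a row filter, and OPTIMIZE/VACUUM could be issued from a Databricks Connect session or notebook. All catalog, schema, table, column, function, and group names are invented for illustration.

    # Hypothetical governance and maintenance commands for a Unity Catalog table.
    # Catalog/schema/table/column/group/function names are invented for illustration.
    from databricks.connect import DatabricksSession

    spark = DatabricksSession.builder.getOrCreate()   # in a notebook, `spark` already exists

    table = "main.sales.silver_orders"

    # Grant read access to an analyst group.
    spark.sql(f"GRANT SELECT ON TABLE {table} TO `analysts`")

    # Row-level security: a SQL UDF attached as a row filter on the region column.
    spark.sql("""
        CREATE OR REPLACE FUNCTION main.sales.uk_rows_only(region STRING)
        RETURN region = 'UK'
    """)
    spark.sql(f"ALTER TABLE {table} SET ROW FILTER main.sales.uk_rows_only ON (region)")

    # Cost/performance maintenance: compact small files, co-locate a common filter
    # column, then clean up files outside the default 7-day retention window.
    spark.sql(f"OPTIMIZE {table} ZORDER BY (customer_id)")
    spark.sql(f"VACUUM {table}")

In practice, grants and the filter function would typically be promoted through Git-based CI/CD rather than run ad hoc, in line with the DevOps expectations described above.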