Senior Databricks Engineer (ref. CGEMJP00319492)

Experis

Glasgow

Hybrid

Up to £414 per day (Umbrella, inside IR35)

Full time

Job summary

A leading recruitment firm is seeking a Senior Databricks Engineer in Glasgow to lead the migration of data pipelines from AWS to Databricks. Responsibilities include designing and optimizing data engineering solutions, ensuring data quality, and collaborating with cross-functional teams. Strong hands-on experience with Databricks, Apache Spark, and AWS services is essential. This contract role offers hybrid working, with 2-3 days per week onsite.

Qualifications

  • Strong hands‑on experience with Databricks and Apache Spark.
  • Proven ability to build and optimize data pipelines in cloud environments.
  • Familiarity with data governance and security best practices.

Responsibilities

  • Lead the migration of data pipelines from AWS to Databricks.
  • Design scalable data engineering solutions using Apache Spark.
  • Collaborate with teams to translate data requirements into pipelines.
  • Optimize Databricks workloads for performance and cost-efficiency.
  • Develop CI/CD workflows for Databricks.
  • Ensure data quality through robust testing and validation.

Skills

  • Databricks expertise
  • Apache Spark (PySpark)
  • AWS services experience
  • Python proficiency
  • GitLab familiarity
  • Data validation techniques

Job description

Role Title

Senior Databricks Engineer

Duration

Contract to run until 31/12/2026

Location

Glasgow, hybrid, 2-3 days per week onsite

Rate

Up to £414 per day (Umbrella, inside IR35)

Role purpose / summary

We are currently migrating our data pipelines from AWS to Databricks, and are seeking a Senior Databricks Engineer to lead and contribute to this transformation. This is a hands‑on engineering role focused on designing, building, and optimizing scalable data solutions using the Databricks platform.
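
For illustration only, a first Databricks job in such a migration might look like the minimal PySpark sketch below; the bucket, schema, and table names are hypothetical and not taken from this posting.

  from pyspark.sql import SparkSession
  from pyspark.sql import functions as F

  # On Databricks a SparkSession already exists; getOrCreate() reuses it.
  spark = SparkSession.builder.getOrCreate()

  # Hypothetical raw source, e.g. files previously processed by an AWS Glue job.
  raw = spark.read.json("s3://example-bucket/events/")

  cleaned = (
      raw.filter(F.col("event_id").isNotNull())
         .dropDuplicates(["event_id"])
         .withColumn("ingested_at", F.current_timestamp())
  )

  # Delta is the default table format on Databricks.
  cleaned.write.format("delta").mode("append").saveAsTable("analytics.events")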

Key Skills / Requirements
  • Lead the migration of existing AWS-based data pipelines to Databricks.
  • Design and implement scalable data engineering solutions using Apache Spark on Databricks.
  • Collaborate with cross-functional teams to understand data requirements and translate them into efficient pipelines.
  • Optimize performance and cost‑efficiency of Databricks workloads.
  • Develop and maintain CI/CD workflows for Databricks using GitLab or similar tools.
  • Ensure data quality and reliability through robust unit testing and validation frameworks.
  • Implement best practices for data governance, security, and access control within Databricks.
  • Provide technical mentorship and guidance to junior engineers.
Must‑Have Skills
  • Strong hands‑on experience with Databricks and Apache Spark (preferably PySpark).
  • Proven track record of building and optimizing data pipelines in cloud environments.
  • Experience with AWS services such as S3, Glue, Lambda, Step Functions, Athena, IAM, and VPC.
  • Proficiency in Python for data engineering tasks.
  • Familiarity with GitLab for version control and CI/CD.
  • Strong understanding of unit testing and data validation techniques.
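
As a rough sketch of what unit testing and data validation can mean in practice for PySpark code (the transformation and test below are hypothetical examples, not part of the role specification):

  import pytest
  from pyspark.sql import SparkSession
  from pyspark.sql import functions as F

  def dedupe_events(df):
      # Transformation under test: drop null IDs, then deduplicate.
      return df.filter(F.col("event_id").isNotNull()).dropDuplicates(["event_id"])

  @pytest.fixture(scope="session")
  def spark():
      # Local SparkSession so the test runs outside Databricks.
      return SparkSession.builder.master("local[1]").getOrCreate()

  def test_dedupe_events(spark):
      df = spark.createDataFrame(
          [("a", 1), ("a", 1), (None, 2)], ["event_id", "value"]
      )
      result = dedupe_events(df)
      assert result.count() == 1
      assert result.first()["event_id"] == "a"
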
Preferred Qualifications
  • Experience with Databricks Delta Lake, Unity Catalog, and MLflow (see the merge sketch after this list).
  • Knowledge of CloudFormation or other infrastructure‑as‑code tools.
  • AWS or Databricks certifications.
  • Experience in large‑scale data migration projects.
  • Background in the finance industry.
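
To make the Delta Lake item above concrete: an upsert into a Delta table is typically expressed as a merge. This is a minimal sketch with hypothetical table names, not code from the project.

  from delta.tables import DeltaTable
  from pyspark.sql import SparkSession

  spark = SparkSession.builder.getOrCreate()

  # Hypothetical target Delta table and staged updates.
  target = DeltaTable.forName(spark, "analytics.events")
  updates = spark.table("staging.event_updates")

  (
      target.alias("t")
      .merge(updates.alias("u"), "t.event_id = u.event_id")
      .whenMatchedUpdateAll()
      .whenNotMatchedInsertAll()
      .execute()
  )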

All profiles will be reviewed against the required skills and experience. Due to the high number of applications, we will only be able to respond to successful applicants in the first instance. We thank you for your interest and the time taken to apply!
