Data Engineer - AWS, Databricks & PySpark

Our Client

City Of London

Hybrid

GBP 80,000 - 100,000

Full time

Job summary

A leading technology firm is seeking a Data Engineer specialising in AWS, Databricks, and PySpark for a hybrid contract role. The successful candidate will maintain and enhance a cloud-based data platform and optimise its ETL pipelines. Strong collaboration skills are key, as you will work with analysts and stakeholders to deliver high-quality datasets. The role pays £350 per day outside IR35 for a 6-month contract.

Qualifications

  • Experience in data engineering with a focus on cloud platforms.
  • Proven ability to maintain and optimize ETL pipelines.
  • Familiarity with data governance and CI/CD practices.

Responsibilities

  • Maintain and optimise existing ETL pipelines to support reporting and analytics.
  • Assist with improvements to performance and cost-efficiency.
  • Collaborate with analysts and stakeholders to deliver usable datasets.

Skills

AWS
Databricks
PySpark
ETL
CI/CD

Tools

Git
DevOps

Job description

Data Engineer - AWS, Databricks & PySpark

Contract Role - Data Engineer
Location: Hybrid (1 day per month onsite in Harrow, London)
Rate: £350 per day (Outside IR35)
Duration: 6 months

A client of mine is looking for a Data Engineer to help maintain and enhance their existing cloud-based data platform. The core migration to a Databricks Delta Lakehouse on AWS has already been completed, so the focus will be on improving pipeline performance, supporting analytics, and contributing to ongoing platform development.

Key Responsibilities:
- Maintain and optimise existing ETL pipelines to support reporting and analytics
- Assist with improvements to performance, scalability, and cost-efficiency across the platform
- Work within the existing Databricks environment to develop new data solutions as required
- Collaborate with analysts, data scientists, and business stakeholders to deliver clean, usable datasets
- Contribute to good data governance, CI/CD workflows, and engineering standards
- Continue developing your skills in PySpark, Databricks, and AWS-based tools

Tech Stack Includes:
- Databricks (Delta Lake, PySpark)
- AWS
- CI/CD tooling (Git, DevOps pipelines)
- Cloud-based data warehousing and analytics tools

If you're a mid-to-senior-level Data Engineer, feel free to apply or send your CV.
