A leading technology firm is seeking a Data Engineer specializing in AWS, Databricks, and PySpark for a hybrid contract role. The successful candidate will maintain and enhance a cloud-based data platform and optimize ETL pipelines. Strong collaboration skills are key as you will work with analysts and stakeholders to deliver high-quality datasets. This role offers a competitive rate of £350 per day outside IR35 for a duration of 6 months.
Data Engineer - AWS, Databricks & PySpark
Contract Role - Data Engineer
Location: Hybrid (1 day per month onsite in Harrow, London)
Rate: £350 per day (Outside IR35)
Duration: 6 months
A client of mine is looking for a Data Engineer to help maintain and enhance their existing cloud-based data platform. The core migration to a Databricks Delta Lakehouse on AWS has already been completed, so the focus will be on improving pipeline performance, supporting analytics, and contributing to ongoing platform development.
Key Responsibilities:
- Maintain and optimise existing ETL pipelines to support reporting and analytics
- Assist with improvements to performance, scalability, and cost-efficiency across the platform
- Work within the existing Databricks environment to develop new data solutions as required
- Collaborate with analysts, data scientists, and business stakeholders to deliver clean, usable datasets
- Contribute to good data governance, CI/CD workflows, and engineering standards
- Continue developing your skills in PySpark, Databricks, and AWS-based tools
Tech Stack Includes:
- Databricks (Delta Lake, PySpark)
- AWS
- CI/CD tooling (Git, DevOps pipelines)
- Cloud-based data warehousing and analytics tools
If you're a mid-to-senior level Data Engineer, feel free to apply or send your CV.