Senior Data Engineer

Fynity

Remote

GBP 70,000

Full time

Today

Job summary

A digital transformation consultancy is seeking a Data Engineer to deliver innovative solutions for high-profile government clients. The role demands expertise in designing ETL pipelines and hands-on experience with cloud services, primarily AWS. Candidates should demonstrate strong skills in Python and SQL, as well as experience with big data tools such as Spark and Kafka. This position offers a competitive salary of up to £70,000 and the opportunity to work in a dynamic environment driving digital transformation across various industries.

Skills

Data engineering experience
Python
SQL
ETL pipeline design
AWS services
Apache Kafka
Spark
Airflow
Terraform
CI/CD workflows

Tools

AWS Lambda
AWS Glue
AWS Redshift
AWS CloudWatch
GitHub Actions

Job description
Data Engineer – SC Cleared (or Clearable)

Location: Remote, with occasional visits to London

Salary: Up to £70,000

Start Date: ASAP

About the Role

Join a dynamic Digital Transformation Consultancy as a Data Engineer and play a pivotal role in delivering innovative, data-driven solutions for high-profile government clients. You’ll be responsible for designing and implementing robust ETL pipelines, leveraging cutting‑edge big data technologies, and driving excellence in cloud‑based data engineering.

This role offers the opportunity to work with leading technologies, collaborate with data architects and scientists, and make a significant impact in a fast‑paced, challenging environment.

Key Responsibilities
  • Design, implement, and debug ETL pipelines to process and manage complex datasets.
  • Leverage big data tools, including Apache Kafka, Spark, and Airflow, to deliver scalable solutions.
  • Collaborate with stakeholders to ensure data quality and alignment with business goals.
  • Utilize programming expertise in Python, Scala, and SQL for efficient data processing.
  • Build data pipelines using cloud‑native services on AWS, including Lambda, Glue, Redshift, and API Gateway.
  • Monitor and optimise data solutions using AWS CloudWatch and other tools.
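As an illustrative sketch of the ETL work described above (not part of the posting; the file names, table name, and schema are hypothetical, and a real pipeline would use Spark or Glue rather than SQLite), a minimal extract-transform-load step in plain Python might look like this:

```python
import csv
import sqlite3


def run_etl(csv_path: str, db_path: str) -> int:
    """Minimal ETL: extract rows from a CSV, clean them, load into SQLite."""
    # Extract: read raw records from the source file.
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))

    # Transform: drop rows with missing fields and normalise names.
    cleaned = [
        (int(r["id"]), r["name"].strip().title())
        for r in rows
        if r.get("id") and r.get("name")
    ]

    # Load: write the cleaned records into a target table.
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)"
    )
    con.executemany("INSERT OR REPLACE INTO users VALUES (?, ?)", cleaned)
    con.commit()
    con.close()
    return len(cleaned)
```

The same extract/transform/load shape carries over when the extract is a Kafka topic, the transform is a Spark job, and the load target is Redshift.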
What We’re Looking For
  • Proven hands-on experience in data engineering projects
  • Strong hands-on experience designing, implementing, and debugging ETL pipelines
  • Expertise in Python, PySpark, and SQL
  • Expertise with Spark and Airflow
  • Experience designing data pipelines using cloud-native services on AWS
  • Extensive knowledge of AWS services such as API Gateway, Lambda, Redshift, Glue, and CloudWatch
  • IaC experience deploying AWS resources using Terraform
  • Hands-on experience setting up CI/CD workflows using GitHub Actions
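As a rough sketch of the CI/CD item above (illustrative only; the workflow name, Python version, and commands are assumptions, not part of the posting), a GitHub Actions workflow for a data pipeline repository might look like:

```yaml
name: ci

on:
  push:
    branches: [main]
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      # Check out the repository and set up the Python toolchain.
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      # Install dependencies and run the test suite on every push and PR.
      - name: Install dependencies
        run: pip install -r requirements.txt
      - name: Run tests
        run: pytest
```

A deployment job applying Terraform (`terraform plan` / `terraform apply`) would typically follow the test job in the same workflow, gated on the main branch.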
SC Clearance Criteria
  • Must be a British Citizen or have resided in the UK for at least 5 consecutive years.
  • Detailed employment history for the past 10 years or longer may be required.
Why Join Us?
  • Be part of a forward‑thinking consultancy driving digital transformation for industry leaders.
  • Work with the latest big data and cloud technologies.
  • Collaborate with a team of skilled professionals in a fast‑paced and rewarding environment.

If you’re passionate about delivering impactful data solutions and meet the criteria for this role, we’d love to hear from you. Apply today and lead the way in digital transformation!
