Senior Data Engineer

Fynity

Remote

GBP 59,000 - 70,000

Full time

Job summary

A dynamic Digital Transformation Consultancy is seeking a Data Engineer to design and implement robust ETL pipelines for high-profile government projects. The role requires expertise in Python, Spark, and AWS services, along with hands-on experience in data engineering. Candidates must meet SC clearance criteria: British citizenship, or UK residency for at least 5 consecutive years. Join a forward-thinking team driving digital transformation while working with cutting-edge technologies in a fast-paced environment.

Skills

Python
SQL
ETL pipeline design and implementation
AWS services (Lambda, Redshift, Glue)
Apache Spark
Apache Kafka
Airflow
Terraform
CI/CD workflows

Job description
Data Engineer – SC Cleared (or Clearable)

Location: Remote, with occasional visits to London

Salary: Up to £70,000

Start Date: ASAP

About the Role

Join a dynamic Digital Transformation Consultancy as a Data Engineer and play a pivotal role in delivering innovative, data-driven solutions for high-profile government clients. You’ll be responsible for designing and implementing robust ETL pipelines, leveraging cutting‑edge big data technologies, and driving excellence in cloud‑based data engineering.

This role offers the opportunity to work with leading technologies, collaborate with data architects and scientists, and make a significant impact in a fast‑paced, challenging environment.

Key Responsibilities
  • Design, implement, and debug ETL pipelines to process and manage complex datasets (see the PySpark sketch after this list).
  • Leverage big data tools, including Apache Kafka, Spark, and Airflow, to deliver scalable solutions.
  • Collaborate with stakeholders to ensure data quality and alignment with business goals.
  • Utilize programming expertise in Python, Scala, and SQL for efficient data processing.
  • Build data pipelines using cloud‑native services on AWS, including Lambda, Glue, Redshift, and API Gateway.
  • Monitor and optimise data solutions using AWS CloudWatch and other tools.
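
The ETL work described above might look like the following minimal PySpark sketch. It is illustrative only, assuming a Spark environment with S3 access configured: the bucket paths, column names (event_id, event_ts), and app name are hypothetical placeholders, not details from this role.

    from pyspark.sql import SparkSession, functions as F

    # Build or reuse a SparkSession (placeholder app name).
    spark = SparkSession.builder.appName("example-etl").getOrCreate()

    # Extract: read raw JSON events from S3 (hypothetical bucket/path).
    raw = spark.read.json("s3://example-raw-bucket/raw_events/")

    # Transform: drop malformed rows, parse the timestamp, deduplicate,
    # and derive a date column to partition by.
    clean = (
        raw.filter(F.col("event_id").isNotNull())
           .withColumn("event_ts", F.to_timestamp("event_ts"))
           .withColumn("event_date", F.to_date("event_ts"))
           .dropDuplicates(["event_id"])
    )

    # Load: write partitioned Parquet for downstream consumers
    # (e.g. queried via Redshift Spectrum or catalogued with Glue).
    (clean.write.mode("overwrite")
          .partitionBy("event_date")
          .parquet("s3://example-curated-bucket/events/"))
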
What We’re Looking For
  • Proven hands‑on experience in data engineering projects
  • Strong hands-on experience designing, implementing, and debugging ETL pipelines
  • Expertise in Python, PySpark, and SQL
  • Expertise with Spark and Airflow (see the DAG sketch after this list)
  • Experience designing data pipelines using cloud-native services on AWS
  • Extensive knowledge of AWS services such as API Gateway, Lambda, Redshift, Glue, and CloudWatch
  • Infrastructure-as-code (IaC) experience deploying AWS resources using Terraform
  • Hands-on experience setting up CI/CD workflows using GitHub Actions
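
For orchestration, Airflow pipelines are defined as Python DAGs. Below is a minimal sketch, assuming Airflow 2.x: the DAG id, schedule, and the extract/transform callables are hypothetical placeholders rather than anything specified by this role.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator


    def extract(**context):
        # Placeholder: pull the day's raw data (e.g. from S3 or an API).
        print("extracting for", context["ds"])


    def transform(**context):
        # Placeholder: clean and reshape the extracted data.
        print("transforming for", context["ds"])


    with DAG(
        dag_id="example_daily_etl",      # hypothetical DAG id
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",      # one run per day
        catchup=False,                   # skip backfilling past runs
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)

        # Chaining with >> encodes the dependency: transform runs only
        # after extract succeeds.
        extract_task >> transform_task
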
SC Clearance Criteria
  • Must be a British Citizen or have resided in the UK for at least 5 consecutive years.
  • Detailed employment history for the past 10 years or longer may be required.

Why Join Us?
  • Be part of a forward‑thinking consultancy driving digital transformation for industry leaders.
  • Work with the latest big data and cloud technologies.
  • Collaborate with a team of skilled professionals in a fast‑paced and rewarding environment.

If you’re passionate about delivering impactful data solutions and meet the criteria for this role, we’d love to hear from you. Apply today and lead the way in digital transformation!
