Job Search and Career Advice Platform


Senior Data Engineer

Fynity

Remote

GBP 59,000 - 70,000

Full time

Posted today


Job summary

A dynamic Digital Transformation Consultancy is seeking a Data Engineer to deliver data-driven solutions for government clients. This role involves designing ETL pipelines using AWS services, collaborating with data experts, and utilizing big data technologies. The ideal candidate should have expertise in data engineering, Python, Spark, and SQL. SC clearance is required, and the position allows for remote work with occasional visits to London. Apply today to be part of a forward-thinking consultancy!

Qualifications

  • Proven hands-on experience in data engineering projects.
  • Expertise in Python, PySpark, and SQL.
  • Experience designing data pipelines using cloud-native services on AWS.

Responsibilities

  • Design, implement, and debug ETL pipelines to manage complex datasets.
  • Leverage big data tools to deliver scalable solutions.
  • Collaborate with stakeholders to ensure data quality.

Skills

Data engineering
Python
SQL
Apache Kafka
Spark
Airflow
AWS
Terraform
GitHub Actions

Job description
Data Engineer – SC Cleared (or Clearable)

Location: Remote, with occasional visits to London

Salary: Up to £70,000

Start Date: ASAP

About the Role

Join a dynamic Digital Transformation Consultancy as a Data Engineer and play a pivotal role in delivering innovative, data-driven solutions for high-profile government clients. You’ll be responsible for designing and implementing robust ETL pipelines, leveraging cutting‑edge big data technologies, and driving excellence in cloud‑based data engineering.

This role offers the opportunity to work with leading technologies, collaborate with data architects and scientists, and make a significant impact in a fast‑paced, challenging environment.

Key Responsibilities
  • Design, implement, and debug ETL pipelines to process and manage complex datasets.
  • Leverage big data tools, including Apache Kafka, Spark, and Airflow, to deliver scalable solutions.
  • Collaborate with stakeholders to ensure data quality and alignment with business goals.
  • Utilize programming expertise in Python, Scala, and SQL for efficient data processing.
  • Build data pipelines using cloud‑native services on AWS, including Lambda, Glue, Redshift, and API Gateway.
  • Monitor and optimise data solutions using AWS CloudWatch and other tools.
What We’re Looking For
  • Proven hands‑on experience in data engineering projects
  • Strong hands‑on experience designing, implementing, and debugging ETL pipelines
  • Expertise in Python, PySpark, and SQL
  • Expertise with Spark and Airflow
  • Experience designing data pipelines using cloud‑native services on AWS
  • Extensive knowledge of AWS services such as API Gateway, Lambda, Redshift, Glue, and CloudWatch
  • Infrastructure-as-Code (IaC) experience deploying AWS resources using Terraform
  • Hands‑on experience setting up CI/CD workflows using GitHub Actions
SC Clearance Criteria
  • Must be a British Citizen or have resided in the UK for at least 5 consecutive years.
  • Detailed employment history for the past 10 years or longer may be required.
Why Join Us?
  • Be part of a forward‑thinking consultancy driving digital transformation for industry leaders.
  • Work with the latest big data and cloud technologies.
  • Collaborate with a team of skilled professionals in a fast‑paced and rewarding environment.

If you’re passionate about delivering impactful data solutions and meet the criteria for this role, we’d love to hear from you. Apply today and lead the way in digital transformation!
