AWS Data Engineer - Permanent

Jefferson Frank

City Of London

Hybrid

GBP 50,000 - 70,000

Full time

Today

Job summary

A data consulting firm is seeking a Data Engineer to design and maintain robust data pipelines and models in a hybrid role based in London. Candidates should have at least 3 years of experience, proficiency in SQL and Python, and a strong background in AWS services. The role involves building ETL pipelines, ensuring data quality, and collaborating with various teams to provide accessible data for analytics.

Job description

Data Engineer - Hybrid (London-Based) - Full-Time

Role Overview

As a Data Engineer, you'll be responsible for designing and maintaining robust data pipelines and models that support analytics and reporting. You'll work with diverse datasets and collaborate with cross-functional teams to ensure data is accurate, accessible, and actionable.

Key Responsibilities
  • Build and maintain scalable ETL/ELT pipelines.
  • Design data models and schemas to support analytics and reporting.
  • Integrate data from APIs, internal systems, and streaming sources.
  • Monitor and ensure data quality and availability.
  • Collaborate with analysts, engineers, and stakeholders to deliver clean datasets.
  • Optimise data architecture for performance and reliability.
  • Share best practices and contribute to team knowledge.
Required Skills
  • 3+ years in a data engineering role.
  • Proficient in SQL and Python.
  • Strong experience with AWS services (e.g., Lambda, Glue, Redshift, S3).
  • Solid understanding of data warehousing and modelling: star/snowflake schema.
  • Familiarity with Git, CI/CD pipelines, and containerisation (e.g., Docker).
  • Ability to troubleshoot BI tool connections (e.g., Power BI).
Desirable Skills
  • Experience with Infrastructure as Code (e.g., CloudFormation).

Please send a copy of your CV if you meet the requirements.
