Data Engineer

Morgan Spencer

City of London

On-site

GBP 60,000 - 80,000

Full time

18 days ago

Job summary

A leading UK rail software company is seeking a Data Engineer to design data pipelines and support analytics within the rail industry. You will collaborate with a technical team and engage in agile practices. Ideal candidates have experience with ETL tools, data APIs, and cloud platforms like AWS. Curiosity about the rail domain is valued. Competitive salary with potential equity offered.

Qualifications

  • Experience building ETL pipelines using tools like Kafka or dbt.
  • Familiarity with cloud data platforms such as AWS Redshift or Azure Synapse.
  • Ability to work with both structured and unstructured data at scale.

Responsibilities

  • Design and implement robust data pipelines for rail-related datasets.
  • Develop APIs and services to support analytics and reporting tools.
  • Participate in agile delivery practices including sprint planning.

Skills

Building ETL/ELT pipelines
Backend development in Python
Working with structured and unstructured data
Collaboration with cross-functional teams
Cloud data platforms (AWS, Azure)
SQL and database design

Job description

Salary: Competitive, negotiable with possible equity in the medium term

Overview

This rail software and consulting company works with leading organisations across the UK rail industry, helping them harness data to solve complex operational challenges. Data Engineers are key to this mission, building the robust data infrastructure and tooling that powers insights, analytics, and software products used across the rail network.

The Role

As a Data Engineer, you'll be part of a collaborative technical team, working across the data lifecycle: from designing ETL pipelines and integrating real-time data streams, to developing APIs and backend systems that deliver rail data securely and reliably. You'll work closely with engineers, consultants, and project managers to translate real-world rail problems into scalable technical solutions. This role sits at the intersection of software engineering, data architecture, and delivery.

Responsibilities
  • Data Engineering & Infrastructure
    • Design and implement robust data pipelines (batch and real-time) for ingesting, transforming, and serving rail-related datasets.
    • Develop and maintain data APIs and services to support analytics, software features, and reporting tools.
    • Build data models and storage solutions that balance performance, cost, and scalability.
    • Contribute to codebases using modern data stack technologies and cloud platforms (e.g., Azure, AWS).
  • Collaborative Delivery
    • Work with domain consultants and delivery leads to understand client needs and define data solutions.
    • Participate in agile delivery practices, including sprint planning, reviews, and retrospectives.
    • Help shape end-to-end solutions — from ingestion and transformation to client-facing features and reporting.
  • Best Practices & Growth
    • Write clean, well-documented, and tested code following engineering standards.
    • Participate in design reviews, code reviews, and collaborative development sessions.
    • Stay up-to-date with new tools and trends in the data engineering space.
    • Contribute to internal learning sessions, tech talks, and shared documentation.
Qualifications
  • You might be a good fit if you have experience with:
    • Building ETL/ELT pipelines using tools like Kafka, dbt, or custom frameworks.
    • Working with structured and unstructured data at scale.
    • Backend development in Python (or similar), and familiarity with data APIs.
    • Cloud data platforms (e.g., AWS Redshift, Azure Synapse).
    • SQL and database design for analytics, reporting, and product use.
    • Agile collaboration with cross-functional teams.
  • You don’t need experience in rail — just curiosity and a willingness to learn the domain.