
Data Engineer

Morgan Spencer

Greater London

On-site

GBP 60,000 - 80,000

Full time

Posted yesterday


Job summary

A leading rail software and consulting firm is seeking a Data Engineer to join their collaborative technical team in Greater London. You will work across the data lifecycle, creating ETL pipelines and integrating real-time data streams while developing secure APIs. This role emphasizes teamwork with engineers and consultants, translating operational challenges into scalable solutions. Candidates should have experience with ETL tools, backend development, and cloud data platforms. Curiosity and a willingness to learn the rail domain are essential.

Qualifications

  • Experience with building ETL/ELT pipelines using tools like Kafka.
  • Familiarity with cloud data platforms such as AWS Redshift and Azure Synapse.
  • Ability to work with both structured and unstructured data at scale.

Responsibilities

  • Design and implement robust data pipelines for rail-related datasets.
  • Develop and maintain data APIs and services for analytics and reporting.
  • Participate in agile delivery practices including sprint planning and retrospectives.

Job description

Salary: Competitive, negotiable with possible equity in the medium term

Company

This business is a rail software and consulting company with a growing team and a solid foundation of project‑based revenue. It works with leading organisations across the UK rail industry, helping them harness data to solve complex operational challenges.

The Role

As a Data Engineer, you’ll be part of a collaborative technical team, working across the data lifecycle: from designing ETL pipelines and integrating real‑time data streams, to developing APIs and backend systems that deliver rail data securely and reliably. You’ll work closely with engineers, consultants, and project managers to translate real‑world rail problems into scalable technical solutions. This role sits at the intersection of software engineering, data architecture, and delivery.

Key Responsibilities

Data Engineering & Infrastructure
  • Design and implement robust data pipelines (batch and real‑time) for ingesting, transforming, and serving rail‑related datasets.
  • Develop and maintain data APIs and services to support analytics, software features, and reporting tools.
  • Build data models and storage solutions that balance performance, cost, and scalability.
  • Contribute to codebases using modern data stack technologies and cloud platforms (e.g., Azure, AWS).

Collaborative Delivery
  • Work with domain consultants and delivery leads to understand client needs and define data solutions.
  • Participate in agile delivery practices, including sprint planning, reviews, and retrospectives.
  • Help shape end‑to‑end solutions, from ingestion and transformation to client‑facing features and reporting.

Best Practices & Growth
  • Write clean, well‑documented, and tested code following engineering standards.
  • Participate in design reviews, code reviews, and collaborative development sessions.
  • Stay up to date with new tools and trends in the data engineering space.
  • Contribute to internal learning sessions, tech talks, and shared documentation.

The Candidate

You might be a good fit if you have experience with:

  • Building ETL/ELT pipelines using tools like Kafka, dbt, or custom frameworks.
  • Working with structured and unstructured data at scale.
  • Backend development in Python (or similar), and familiarity with data APIs.
  • Cloud data platforms (e.g., AWS Redshift, Azure Synapse).
  • SQL and database design for analytics, reporting, and product use.
  • Agile collaboration with cross‑functional teams.

You don’t need experience in rail — just curiosity and a willingness to learn the domain.
