Data Engineer

Norton Blake

Greater London

Hybrid

GBP 60,000 - 80,000

Full time

11 days ago

Job summary

A leading UK energy firm is seeking a Data Engineer to build high-performance data systems using open-source technologies. You will join a data and trading team shaping the future of energy, working on data ingestion pipelines, ETL processes, and real-time analytics. The ideal candidate has strong SQL and Python skills; hands-on experience with Apache Spark, Kafka, and Docker; and a problem-solving mindset. This hybrid role offers flexibility, with an in-office requirement of twice a week in London.

Qualifications

  • Strong SQL and Python skills, including experience with data processing libraries such as pandas and NumPy.
  • Hands-on experience with Apache Spark and distributed computing.
  • Experience with Docker, Kubernetes, and CI/CD pipelines.

Responsibilities

  • Designing and building robust data ingestion pipelines.
  • Creating and optimising ETL / ELT processes.
  • Monitoring, troubleshooting and improving pipeline performance.

Skills

SQL
Python (pandas, NumPy, Spark)
Apache Spark
Kafka
Airflow
Docker
Kubernetes
CI/CD pipelines

Job description

Data Engineer | UK Energy Trading | Open-Source Stack | Circa 80k

My client, an energy leader at the forefront of the UK market, is searching for a Data Engineer to join them as soon as possible. This is an exciting opportunity to join a highly experienced data and trading team helping to shape the future of energy through data-driven insight and technology.

We are looking for a Data Engineer who's passionate about building scalable, high-performance data systems using open-source technologies. You'll play a key role in developing robust pipelines, optimising performance, and supporting real-time analytics across the business.

This will be a hybrid role - ideally twice a week in London, but there is flexibility.

What you’ll be doing
  • Designing and building robust data ingestion pipelines
  • Creating and optimising ETL / ELT processes across varied data volumes
  • Developing data models and schemas to support analytics and product use cases
  • Monitoring, troubleshooting and improving pipeline performance and reliability
  • Implementing data quality, validation and monitoring processes
  • Contributing to architecture decisions and the data roadmap

What we’re looking for

A strong engineer who cares about performance, scalability and clean design.

Key experience:

  • Strong SQL and Python skills (pandas, NumPy, Spark)
  • Hands‑on experience with Apache Spark, Kafka, and Airflow
  • Solid understanding of data warehousing and distributed computing
  • Experience with Docker, Kubernetes, and CI/CD pipelines
  • A problem‑solving mindset and the ability to communicate complex ideas clearly

Please apply for more information
