Senior Data Engineer (Databricks, Scala, Spark) – Contract

La Fosse Associates

London

Hybrid

GBP 60,000 - 100,000

Full time

Yesterday

Job summary

An established industry player is seeking a Senior Data Engineer to enhance their data platform within the financial services sector. This role focuses on building and maintaining data lakehouses, ensuring seamless integration of data sources, and collaborating with business users in Capital Markets. Ideal candidates will possess expertise in Databricks, Scala, and Python, along with a deep understanding of Spark architecture. Join a dynamic team where your contributions will directly impact business outcomes in a rapidly evolving industry. If you are passionate about data engineering and thrive in a collaborative environment, this opportunity is perfect for you.

Qualifications

  • Strong track record in building and maintaining data lakehouses.
  • Experience in banking or capital markets domains.

Responsibilities

  • Design, develop, and maintain scalable data pipelines.
  • Collaborate with Traders and Quants to deliver data solutions.

Skills

Databricks
Scala
Python
Spark architecture
ETL design
Data modelling
Agile delivery
Trading knowledge

Job description

Location: Central London (2–3 days/week onsite)
Type: Contract – 6 months+

Outside IR35

We’re looking for a Senior Data Engineer with a strong track record of building and maintaining data lakehouses, ideally within the banking or capital markets domains, to join our global financial services client on a contract basis.

This role supports business users in the Capital Markets space, so knowledge of derivatives, risk, PnL, trade lifecycle, market data, and scenarios is highly desirable.

Key Responsibilities

  • Own and drive components of the data platform through deep domain understanding.
  • Design, develop, and maintain scalable, high-performance data pipelines.
  • Ensure consistent and accurate integration of internal and external data sources.
  • Follow best practices in ETL, data modelling (medallion architecture), and pipeline development (a brief sketch follows this list).
  • Collaborate with Traders and Quants to deliver data solutions aligned with business needs.
  • Maintain data quality, troubleshoot issues, and optimize platform performance.
  • Produce clear documentation for pipeline design and processes.
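
The medallion point above is the one concrete architecture this ad names. As a purely illustrative sketch (the client's actual tables, schemas, and paths are not specified here), a minimal bronze/silver/gold flow in Spark/Scala might look like this, using made-up trade data:

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions._

// Minimal medallion-architecture sketch. All data, column names, and table
// layers here are hypothetical illustrations, not the client's platform.
object MedallionPipelineSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("medallion-sketch")
      .master("local[*]") // assumption: local run; on Databricks the session is provided
      .getOrCreate()
    import spark.implicits._

    // Bronze: land raw trade events as-is (duplicate row included on purpose).
    val bronze: DataFrame = Seq(
      ("T-001", "2024-05-01", "GBP", 1000000.0),
      ("T-002", "2024-05-01", "USD", -250000.0),
      ("T-002", "2024-05-01", "USD", -250000.0)
    ).toDF("trade_id", "trade_date", "currency", "notional")

    // Silver: cleanse and conform, deduplicating and typing the date column.
    val silver = bronze
      .dropDuplicates("trade_id")
      .withColumn("trade_date", to_date($"trade_date"))
      .filter($"notional".isNotNull)

    // Gold: business-level aggregate for downstream consumers (e.g. risk or PnL views).
    val gold = silver
      .groupBy($"trade_date", $"currency")
      .agg(sum($"notional").as("net_notional"), count("*").as("trade_count"))

    gold.show()
    spark.stop()
  }
}
```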

Required Skills & Experience

  • Expertise in Databricks, including Unity Catalog (preferably on Azure)
  • Strong programming skills in Scala and Python
  • Deep understanding of Spark architecture and performance tuning (a short tuning sketch follows this list)
  • Proven experience with ETL design, data modelling, and database design
  • Strong grasp of the full SDLC and Agile delivery
  • Solid knowledge of trading, derivatives, and risk systems (nice to have)
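
To give a flavour of the Spark performance tuning mentioned above, here is a minimal, hypothetical sketch of one common lever, broadcasting a small dimension table so the large side of a join avoids a shuffle; the trade and book data are invented for illustration, not drawn from the client's platform:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.broadcast

// Broadcast-join sketch; tables and names are illustrative assumptions only.
object BroadcastJoinSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("broadcast-join-sketch")
      .master("local[*]") // assumption: local run for demonstration
      .getOrCreate()
    import spark.implicits._

    // A large fact table of trades and a small dimension table of books.
    val trades = Seq(("T-001", "B1", 1.0e6), ("T-002", "B2", -2.5e5))
      .toDF("trade_id", "book_id", "notional")
    val books = Seq(("B1", "Rates"), ("B2", "FX")).toDF("book_id", "desk")

    // Broadcasting the small side keeps the large side from shuffling,
    // which the physical plan should confirm as a BroadcastHashJoin.
    val enriched = trades.join(broadcast(books), Seq("book_id"))
    enriched.explain()
    enriched.show()

    spark.stop()
  }
}
```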