
Senior Data Engineer

Verition Fund Management LLC

Greater London

On-site

GBP 75,000 - 100,000

Full time


Job summary

A multi-strategy hedge fund is seeking a Senior Data Engineer to build data pipelines on the AWS platform. The ideal candidate will have over 10 years in data engineering, with extensive experience in Java development and cloud services. Responsibilities include working with data vendors, designing data infrastructure, and mentoring junior team members. It's crucial to have strong communication skills and the ability to thrive under tight deadlines. This is a pivotal role in supporting data-driven decision-making within the organization.

Qualifications

  • 10+ years of experience in a similar role.
  • 5+ years of hands-on Java development experience.
  • Prior buy-side experience is strongly preferred.

Responsibilities

  • Building data pipelines on the AWS platform.
  • Working closely with data vendors.
  • Normalizing and standardizing vendor data.

Skills

AWS cloud platforms
Java development
Data pipeline engineering
NoSQL and SQL databases
Communication skills

Education

Bachelor’s degree in Computer Science, Engineering, Mathematics, or a related field

Tools

AWS tools (S3, Lambda, EventBridge, etc.)
Terraform
Python
ETL tools (Python-based)

Job description

Verition Fund Management LLC (“Verition”) is a multi-strategy, multi-manager hedge fund founded in 2008. Verition focuses on global investment strategies including Global Credit, Global Convertible, Volatility & Capital Structure Arbitrage, Event-Driven Investing, Equity Long/Short & Capital Markets Trading, and Global Quantitative Trading.

We are seeking a Senior Data Engineer with advanced experience in cloud data platforms to join our technology team. The role calls for an engineer with experience building pipelines in AWS to augment a team that operates a cloud platform for structured data, market data, security master, streaming data, and alternative data acquired from APIs, files, scrapes, websites, internal databases, and other sources. Experience with pipeline building, supporting large and historical data sets, data monitoring and validation, support, operations, and request management, as well as interaction with clients such as Technology, Operations, and Portfolio Managers, will be required.
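To make the data monitoring and validation duties above concrete, here is a minimal, illustrative Python sketch of a daily batch check. The record shape and field names (`as_of`, `row_count`, `expected_min_rows`) are assumptions for illustration, not Verition's actual schema:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical daily-batch metadata; field names are illustrative only.
@dataclass
class DailyBatch:
    as_of: date            # business date the vendor file claims to cover
    row_count: int         # rows received in this batch
    expected_min_rows: int # floor derived from historical volumes

def validate_batch(batch: DailyBatch, today: date) -> list[str]:
    """Return a list of human-readable validation failures (empty = pass)."""
    failures = []
    if batch.as_of > today:
        failures.append(f"as_of date {batch.as_of} is in the future")
    if batch.row_count < batch.expected_min_rows:
        failures.append(
            f"row count {batch.row_count} below expected minimum "
            f"{batch.expected_min_rows}"
        )
    return failures
```

In practice a check like this would run as a pipeline step (e.g., a Lambda triggered after file arrival) and route failures to alerting rather than returning strings.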

Responsibilities:
  • Building data pipelines on the AWS platform using existing tools such as cron, Glue, EventBridge, Python-based ETL, and AWS Redshift
  • Working closely with data vendors such as Bloomberg, Refinitiv, exchanges, SpiderRock, SocGen, etc.
  • Normalizing and standardizing vendor and firm data for firm-wide consumption
  • Helping support and expand platform capabilities, from basic daily/historical processing to data products and private data storage
  • Coordinating with internal teams on delivery, access, requests, and support
  • Promoting data engineering best practices and mentoring junior team members; conducting architectural and design reviews, establishing best practices for data engineering, and contributing to hiring and the technical development of the global team
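The vendor-normalization responsibility above can be sketched in miniature. This is an illustrative Python example only; the vendor names, field maps, and internal schema (`symbol`, `price`, `as_of`) are assumptions, not actual feed formats:

```python
# Map each vendor's raw field names onto one internal schema.
# Vendor identifiers and field names below are hypothetical.
VENDOR_FIELD_MAPS = {
    "vendor_a": {"Ticker": "symbol", "Px": "price", "Dt": "as_of"},
    "vendor_b": {"symbol": "symbol", "last_price": "price", "date": "as_of"},
}

def normalize(vendor: str, record: dict) -> dict:
    """Rename vendor-specific fields to the internal schema, dropping extras."""
    mapping = VENDOR_FIELD_MAPS[vendor]
    return {internal: record[raw]
            for raw, internal in mapping.items() if raw in record}
```

A real pipeline would add type coercion, currency/scale conventions, and schema validation on top of the rename, but the mapping-table pattern keeps per-vendor quirks out of downstream code.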
Required Skills and Qualifications:
  • Bachelor’s degree in Computer Science, Engineering, Mathematics, or a related field
  • 10+ years of experience in a similar role
  • 5+ years of hands‑on Java development experience (Java 11 or higher preferred)
  • Prior buy‑side experience is strongly preferred (multi‑strat/hedge funds)
  • Capital markets experience is necessary, with a good working knowledge of reference data across asset classes and experience with trading systems
  • Prior experience in low‑latency ingestion pipelines, in‑memory and persistent storage layers
  • Deep AWS cloud experience with common services such as S3, Lambda, cron, EventBridge, etc.
  • Proven experience in designing and deploying data infrastructure at scale preferably in a financial services firm or hedge fund.
  • Experience designing and deploying disaster recovery and high availability strategies in cloud environments.
  • Strong hands‑on skills with NoSQL and SQL databases, programming in Python, and data pipeline and analytics tools
  • Familiarity with time series data and common market data sources (Bloomberg, Refinitiv etc.)
  • Familiarity with modern DevOps practices and infrastructure‑as‑code tools (e.g., Terraform, CloudFormation)
  • Familiarity with fast‑moving data is preferred
  • Exposure to Chronicle libraries, Reactive Streams, or Disruptor pattern is a plus
  • Expert knowledge of Java concurrency, NIO, JVM tuning, and lock‑free data structures.
  • Excellent communication skills to work with stakeholders across technology, investment, and operations teams.
  • Ability to work in a fast‑paced environment with tight deadlines.