
Data Engineer

P Corp

City Of London

Hybrid

GBP 60,000 - 80,000

Full time

Today

Job summary

A dynamic FinTech company in the City of London is seeking an experienced Data Engineer to build and optimize scalable data pipelines across financial systems. You will focus on integrating diverse sources and ensuring data quality while driving cloud analytics. The ideal candidate has over 5 years of data engineering experience and is proficient in SQL and ETL frameworks; the role offers hybrid or remote working arrangements.

Benefits

Flexible working arrangements
Local insurance coverage
Beautiful office space
Career growth opportunities

Qualifications

  • 5+ years of hands-on data engineering, designing and maintaining production-grade data pipelines.
  • Advanced SQL for data analysis.
  • Proficient in ETL/ELT frameworks.
  • Intermediate programming skills in Python and bash.
  • Experience with data warehouses (PostgreSQL).

Responsibilities

  • Drive end-to-end development of data engineering initiatives.
  • Oversee implementation and optimization of ETL pipelines.
  • Manage cross-functional collaboration for data delivery.
  • Develop data infrastructure roadmaps and resource plans.
  • Lead strategic evolution of the data stack.

Skills

Data engineering experience
Advanced SQL
ETL/ELT frameworks
Python programming
PostgreSQL experience
Data validation and quality tools
Containerization
Real-time streaming and batch architectures

Tools

DLT
SQLMesh
Docker
Kubernetes
GitLab/GitHub Actions

Job description
Overview

About us

Are you passionate about FinTech and ready to make a tangible impact in a dynamic company where your decisions shape the future? Altery could be the next chapter in your professional journey!

We seek an experienced Data Engineer to build and optimize scalable data pipelines across financial, compliance, and business systems. This role will focus on integrating diverse sources, ensuring data quality and performance, and driving our cloud-based analytics infrastructure using tools like DLT, SQLMesh, and Postgres.

Responsibilities
  • Drive end-to-end development of data engineering initiatives across multiple systems including financial services, CRM platforms, and analytics tools, leveraging technologies such as DLT and SQLMesh
  • Oversee the implementation and optimization of scalable ETL pipelines with robust error handling, incremental loading, and schema evolution for destinations like PostgreSQL (a minimal pipeline sketch follows this list)
  • Manage cross-functional collaboration with product, engineering, compliance, and business intelligence teams to ensure timely and accurate data delivery
  • Develop and maintain data infrastructure roadmaps, resource plans, and risk mitigation strategies aligned with business and regulatory priorities
  • Lead the strategic evolution of our data stack to support real-time processing, advanced analytics, and compliance reporting in the financial services domain
  • Monitor and report on key data pipeline metrics including latency, throughput, and data quality benchmarks
  • Create and maintain detailed technical documentation, including pipeline designs, data contracts, and operational procedures
  • Coordinate with data governance and security teams to ensure adherence to GDPR, KYC/AML, and PCI-DSS standards
  • Collaborate with DevOps to maintain CI/CD pipelines and containerized environments, ensuring smooth deployment and system stability
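
The incremental-loading responsibility above can be illustrated with a minimal dlt pipeline targeting PostgreSQL. This is a sketch under stated assumptions rather than the production pipeline: the API endpoint and the id/updated_at field names are hypothetical placeholders, and Postgres credentials are assumed to come from dlt's standard secrets.toml/environment configuration.

    import dlt
    import requests

    # Hypothetical source endpoint -- stands in for a real financial/CRM system.
    API_URL = "https://api.example.com/v1/transactions"

    @dlt.resource(write_disposition="merge", primary_key="id")
    def transactions(
        updated_at=dlt.sources.incremental("updated_at", initial_value="2024-01-01T00:00:00Z")
    ):
        """Yield only records newer than the cursor dlt stored on the previous run."""
        params = {"updated_since": updated_at.last_value}
        response = requests.get(API_URL, params=params, timeout=30)
        response.raise_for_status()  # fail loudly rather than loading a partial batch
        yield response.json()["results"]

    # Credentials for the Postgres destination are read from secrets.toml or env vars.
    pipeline = dlt.pipeline(
        pipeline_name="finance_transactions",
        destination="postgres",
        dataset_name="raw_finance",
    )

    if __name__ == "__main__":
        load_info = pipeline.run(transactions())
        print(load_info)  # per-package load status, useful for pipeline monitoring

dlt infers and evolves table schemas automatically, and the merge write disposition with a primary key deduplicates records across incremental runs, which is the behaviour the incremental-loading bullet refers to.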
What You’ll Bring To Us
  • 5+ years of hands-on data engineering, designing and maintaining production-grade data pipelines
  • Advanced SQL for data analysis (window functions, performance tuning)
  • Proficient in ETL/ELT frameworks (e.g., DLT, Airflow, dbt/SQLMesh, Dagster) for ingesting, transforming, and loading data
  • Intermediate programming skills in Python (pandas, pyarrow, REST API integrations) and bash for scheduling and orchestration
  • Experience with data warehouses (PostgreSQL), including partitioning, clustering, and indexing
  • SQLMesh for data processing
  • Strong command of data validation and quality tooling, implementing checks, alerts, and retry logic for high-volume data flows (a check-and-retry sketch follows this list)
  • Hands-on with containerized deployments (Docker, Kubernetes) and CI/CD pipelines (GitLab/GitHub Actions) to automate testing and rollout of payment-data services
  • Skilled in real-time streaming and batch architectures, using Kafka or Pub/Sub
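
As a rough sketch of the check-and-retry pattern the data-validation bullet refers to, the snippet below pairs a retried REST fetch with a few basic quality gates in plain Python (requests + pandas). The endpoint URL and column names are assumptions for illustration; a production setup would normally use dedicated validation tooling and route failures into alerting rather than simply raising.

    import time

    import pandas as pd
    import requests

    MAX_RETRIES = 3

    def fetch_with_retry(url: str, params: dict | None = None) -> pd.DataFrame:
        """Fetch records from a REST endpoint, retrying transient failures with backoff."""
        for attempt in range(1, MAX_RETRIES + 1):
            try:
                response = requests.get(url, params=params, timeout=30)
                response.raise_for_status()
                return pd.DataFrame(response.json()["results"])
            except (requests.ConnectionError, requests.Timeout, requests.HTTPError):
                if attempt == MAX_RETRIES:
                    raise  # give up and let the orchestrator's alerting take over
                time.sleep(2 ** attempt)  # exponential backoff: 2s, 4s, ...

    def validate(df: pd.DataFrame) -> pd.DataFrame:
        """Minimal quality checks before loading: required columns, non-null and unique keys."""
        required = {"id", "amount", "currency", "updated_at"}  # assumed schema
        missing = required - set(df.columns)
        if missing:
            raise ValueError(f"missing columns: {missing}")
        if df["id"].isnull().any():
            raise ValueError("null primary keys found")
        if df["id"].duplicated().any():
            raise ValueError("duplicate primary keys found")
        return df

    if __name__ == "__main__":
        frame = validate(fetch_with_retry("https://api.example.com/v1/transactions"))
        print(f"validated {len(frame)} rows")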
What we offer
  • Team and our Product: We are team players who are passionate about our product, understand what we aim to achieve, and know the impact it will make.
  • Growth Opportunities: You can influence and shape our story while advancing your career.
  • Flexibility: We always listen to our people and can be flexible with arrangements.
  • Hybrid or Remote Working: We don’t expect you to be in the office every day.
  • Local Market Perks: Enjoy insurance coverage, local perks, and beautiful offices.
Why join us

We may not be perfect, but our strength lies in our resilience. We face challenges with expertise, a positive attitude, and a supportive environment where everyone relies on one another, and that gives us confidence in what we do. We empower our people to make decisions, explore, and experiment - micromanagement isn't our style. We reward those who take on additional responsibilities and go the extra mile.

We are proud of how diverse and unique we are. We thrive on diverse views, love learning from one another, and believe that our differences fuel our curiosity.
