Senior Data Engineer

Pantheon

City of London

On-site

GBP 70,000 - 90,000

Full time

Job summary

A leading private markets investment firm is seeking a passionate Senior Data Engineer to design and implement data pipelines using Azure technologies. The role is integral to building a new best-in-class Data Platform and draws on skills in Python, PySpark, SQL, and Azure Data Factory. Candidates should have 5+ years of experience in data engineering roles and a strong background in Agile development. You will join a small, skilled team focused on delivering high-quality data solutions.

Benefits

Diverse and inclusive workforce

Qualifications

  • Demonstrable experience in designing and operating production-grade data platforms.
  • Excellent communication skills for technical and non-technical interactions.
  • Experience in Agile delivery environments.

Responsibilities

  • Design and implement data pipelines using Azure technologies.
  • Collaborate with business stakeholders to translate data requirements.
  • Ensure data quality and implement CI/CD practices.

Skills

Python
PySpark
SQL
Azure Data Factory
Azure DevOps
Data modelling
Data quality
Agile software development

Experience

5+ years in data engineering

Tools

Azure Databricks
dbt
Power BI

Job description

Pantheon has been at the forefront of private markets investing for more than 40 years, earning a reputation for providing innovative solutions covering the full lifecycle of investments, from primary fund commitments to co‑investments and secondary purchases, across private equity, real assets and private credit.

We have partnered with more than 650 clients, including institutional investors of all sizes as well as a growing number of private wealth advisers and investors, with approximately $65 bn in discretionary assets under management (as of December 31, 2023).

Leveraging our specialized experience and global team of professionals across Europe, the Americas and Asia, we invest with purpose and lead with expertise to build secure financial futures.

Pantheon is undergoing a multi‑year program to build out a new best‑in‑class Data Platform using cloud‑native technologies hosted in Azure. We require an experienced and passionate hands‑on Senior Data Engineer to design and implement new data pipelines and adapt them to business and/or technology changes. This role will be integral to the success of this program and to establishing Pantheon as a data‑centric organization.

You will be working with a modern Azure tech stack, and proven experience of ingesting and transforming data from a variety of internal and external systems is core to the role.

You will be part of a small and highly skilled team, and you will need to be passionate about providing best‑in‑class solutions to our global user base.

Key Responsibilities
  • Design, build, and maintain scalable, secure, and high‑performance data pipelines on Azure, primarily using Azure Databricks, Azure Data Factory, and Azure Functions.
  • Develop and optimise batch and streaming data processing solutions using PySpark and SQL to support analytics, reporting, and downstream data products.
  • Implement robust data transformation layers using dbt, ensuring well‑structured, tested, and documented analytical models.
  • Collaborate closely with business analysts, QA teams, and business stakeholders to translate data requirements into reliable technical solutions.
  • Ensure data quality, reliability, and observability through automated testing, monitoring, logging, and alerting.
  • Lead on performance tuning, cost optimisation, and capacity planning across Databricks and associated Azure services.
  • Implement and maintain CI/CD pipelines using Azure DevOps, promoting best practices for version control, automated testing, and deployment.
  • Enforce data governance, security, and compliance standards, including access controls, data lineage, and auditability.
  • Contribute to architectural decisions and provide technical leadership, mentoring junior engineers and setting engineering standards.
  • Produce clear technical documentation and contribute to knowledge sharing across the data engineering function.
Knowledge & Experience Required
Essential Technical Skills
  • Python and PySpark for large‑scale data processing.
  • SQL (advanced querying, optimisation, and data modelling).
  • Azure Data Factory (pipeline orchestration and integration).
  • Azure DevOps (Git, CI/CD pipelines, release management).
  • Azure Functions / serverless data processing patterns.
  • Data modelling (star schemas, data vault, or lakehouse‑aligned approaches).
  • Data quality, testing frameworks, and monitoring/observability.
  • Strong problem‑solving ability and a pragmatic, engineering‑led mindset.
  • Experience in an Agile software development environment.
  • Excellent communication skills, with the ability to explain complex technical concepts to both technical and non‑technical stakeholders.
  • Leadership and mentoring capability, with a focus on raising engineering standards and best practices.
  • Significant commercial experience (typically 5+ years) in data engineering roles, with demonstrable experience designing and operating production‑grade data platforms.
  • Strong hands‑on experience with Azure Databricks, including cluster configuration, job orchestration, and performance optimisation.
  • Proven experience building data pipelines with Databricks and Azure Data Factory, integrating with Azure‑native services (e.g., Data Lake Storage Gen2, Azure Functions).
  • Advanced experience with Python for data engineering, including PySpark for distributed data processing.
  • Strong SQL expertise, with experience designing and optimising complex analytical queries and data models.
  • Practical experience using dbt in a production environment, including model design, testing, documentation, and deployment.
  • Experience implementing CI/CD pipelines using Azure DevOps or equivalent tooling.
  • Solid understanding of data warehousing and lakehouse architectures, including dimensional modelling and modern analytics patterns.
  • Experience working in agile delivery environments and collaborating with cross‑functional teams.
  • Exposure to cloud security, data governance, and compliance concepts within Azure.

Desired Experience
  • Power BI and DAX
  • Business Objects Reporting

This job description is not to be construed as an exhaustive statement of duties, responsibilities, or requirements. You may be required to perform other job‑related duties as reasonably requested by your manager.

Pantheon is an Equal Opportunities employer; we are committed to building a diverse and inclusive workforce, so if you're excited about this role but your past experience doesn't perfectly align, we'd still encourage you to apply.
