
Senior Data Engineer (Python & SQL)

Datatech Analytics

Greater London

Hybrid

GBP 70,000 - 85,000

Full time

Job summary

A leading analytics firm in Greater London is seeking a Senior Data Engineer with strong Python and SQL skills to build reliable pipelines that turn imperfect data into high-quality, decision-ready datasets. This hybrid role emphasises data quality and mentorship within the engineering team, tackling real-world data challenges. The ideal candidate will have experience with cloud-based data platforms and a collaborative mindset. With remuneration of £70,000 to £85,000, the role offers significant scope for professional growth and contribution to meaningful projects.

Benefits

Supportive environment
Clear progression opportunities
Collaborative culture

Qualifications

  • Strong experience using Python and SQL to transform large, real-world datasets in production environments.
  • Experience working with modern data platforms such as Azure, GCP, AWS, Databricks, or Snowflake.
  • Confidence working with imperfect data and making it fit for consumption.

Responsibilities

  • Designing and building cloud-based data and machine learning pipelines.
  • Writing clear, well-structured Python, PySpark, and SQL to transform data.
  • Taking ownership of data quality, consistency, and reliability.

Skills

Python
SQL
Data Quality
Data Pipelines
Mentoring

Tools

Azure
AWS
GCP
Databricks
Snowflake

Job description
Senior Data Engineer (Python & SQL)

Location: London, with hybrid working Monday to Wednesday in the office

Salary: £70,000 to £85,000, depending on experience

Reference: J13026

An AI-first SaaS business that transforms high-quality first-party data into trusted, decision-ready insight at scale is looking for a Senior Data Engineer to join its growing data and engineering team.

This role sits at the core of data engineering. You will work with data that is often imperfect and transform it into well-structured, reliable datasets that other teams can depend on. The focus is on engineering high-quality data foundations rather than analytics or cloud infrastructure alone.

You will design and build clear, maintainable data pipelines using Python and SQL within a modern data and AI platform, with a strong focus on data quality, robustness, and long-term reliability.

You will also play an important mentoring role within the team, supporting and guiding other data engineers and helping to raise engineering standards through thoughtful, hands-on leadership.

Why join
  • A supportive and inclusive environment where different perspectives are welcomed and people are encouraged to contribute and be heard
  • Clear progression with space to deepen your technical expertise and grow your confidence at a sustainable pace
  • A team that values collaboration, good communication, and shared ownership over hero culture
  • The opportunity to work on meaningful data engineering problems where quality genuinely matters

What you will be doing
  • Designing and building cloud-based data and machine learning pipelines that prepare data for analytics, AI, and product use
  • Writing clear, well-structured Python, PySpark, and SQL to transform and validate data from multiple upstream sources
  • Taking ownership of data quality, consistency, and reliability across the pipeline lifecycle
  • Shaping scalable data models that support a wide range of downstream use cases
  • Working closely with Product, Engineering, and Data Science teams to understand data needs and constraints
  • Mentoring and supporting other data engineers, sharing knowledge and encouraging good engineering practices
  • Contributing to the long term health of the data platform through thoughtful design and continuous improvement

What we are looking for
  • Strong experience using Python and SQL to transform large, real-world datasets in production environments
  • A deep understanding of data structures, data quality challenges, and how to design reliable transformation logic
  • Experience working with modern data platforms such as Azure, GCP, AWS, Databricks, Snowflake, or similar
  • Confidence working with imperfect data and making it fit for consumption downstream
  • Experience supporting or mentoring other engineers through code reviews, pairing, or informal guidance
  • Clear, thoughtful communication and a collaborative mindset

You do not need to meet every requirement listed. What matters most is strong, hands-on experience using Python and SQL to work confidently with complex, real-world data, sound engineering judgement, and a willingness to help others grow through your experience.

Right to work in the UK is required. Sponsorship is not available now or in the future.

Apply to find out more about the role.

For each successful placement, you will be eligible for our general gift or voucher scheme.
