Junior Data Engineer (Data Science)

Havas Group

Leeds

On-site

GBP 30,000 - 40,000

Full time

Job summary

A leading digital marketing agency in the UK is seeking a Junior Data Engineer to assist in building and maintaining data pipelines, optimising SQL queries, and supporting the deployment of data solutions. The role requires solid Python and SQL skills and familiarity with Docker. You will be part of a collaborative team focused on delivering impactful data solutions for clients. This is a permanent position with an employer committed to a diverse, inclusive workplace and equal opportunities.

Qualifications

  • Solid foundational Python skills for scripting and APIs.
  • Working knowledge of SQL for data manipulation.
  • Practical experience with Docker and Git workflows.

Responsibilities

  • Assist in building data pipelines from external APIs.
  • Develop SQL queries for data aggregation and reporting.
  • Support deployment of containerized solutions using Docker.

Skills

  • Python
  • SQL
  • Docker
  • Git workflows
  • CI/CD principles
  • Communication skills

Job Description

Agency: Havas Market

Junior Data Engineer (Data Science)

Reporting To: Head of Data Science

Hiring Manager: Head of Data Science

Office Location: BlokHaus West Park, Ring Rd, Leeds LS16 6QG

About Us – Havas Media Network

Havas Media Network (HMN) employs over 900 people in the UK & Ireland. We are passionate about helping our clients create more Meaningful Brands through the creation and delivery of more valuable experiences. Our Havas mission: To make a meaningful difference to the brands, the businesses and the lives of the people we work with.

HMN UK spans London, Leeds, Manchester & Edinburgh, servicing our clients brilliantly through our agencies including Ledger Bennett, Havas Market, Havas Media, Arena Media, DMPG and Havas Play Network.

This role will be part of Havas Market, our performance-focused digital marketing agency.

Our values shape the way we work and define what we expect from our people:

  • Human at Heart: You will respect, empower, and support others, fostering an inclusive workplace and creating meaningful experiences.
  • Head for Rigour: You will take pride in delivering high-quality, outcome-focused work and continually strive for improvement.
  • Mind for Flair: You will embrace diversity and bold thinking to innovate and craft brilliant, unique solutions.

These behaviours are integral to our culture and essential for delivering impactful work for our clients and colleagues.

The Role

In this position, you’ll play a vital role in delivering a wide variety of projects for our clients and internal teams. You’ll be working as part of a team to create solutions to a range of problems – from bringing data together from multiple sources into centralised datasets, to building predictive models to drive optimisation of our clients’ digital marketing.

We are a small, highly collaborative team, and we value cloud-agnostic technical fundamentals and self-sufficiency above specific platform expertise. The following requirements reflect the skills needed to contribute immediately and integrate smoothly with our existing workflow.

Key Responsibilities
  • Assist with building and maintaining data pipelines that ingest data from external APIs into cloud data warehouses, developing custom integrations where pre-built connectors don’t exist (see the sketch after this list).
  • Develop and optimise SQL queries and data transformations in BigQuery and AWS to aggregate, join, blend, clean and de-dupe data for modelling, reporting and analysis.
  • Work with senior members of the team to design and implement data models on the datasets created above for in-depth analysis and segmentation.
  • Support the deployment of containerised data solutions using Docker and Cloud Run, ensuring pipelines run reliably with appropriate error handling and monitoring.
  • Assist with configuring and maintaining CI/CD pipelines in Azure DevOps to automate testing, deployment, and infrastructure provisioning for data and ML projects.
  • Create clear technical documentation including architecture diagrams, data dictionaries, and implementation guides to enable team knowledge sharing and project handovers.
  • Participate actively in code reviews, providing constructive feedback on SQL queries, Python code, and infrastructure configurations to maintain team code quality standards.
  • Support Analytics and Business Intelligence teams by creating reusable data assets, troubleshooting data quality issues, and building datasets that enable self-service reporting.
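
To make the first responsibility above concrete, here is a minimal sketch of an API-to-BigQuery ingestion with a SQL de-dupe step, in the Python/SQL/BigQuery stack this role uses. It is illustrative only: the endpoint URL, project, dataset, table, and column names (campaign_id, date, loaded_at) are hypothetical placeholders rather than Havas systems, and it assumes the requests and google-cloud-bigquery packages with default GCP credentials.

    # Hypothetical sketch: ingest one page of an external API into a raw
    # BigQuery table, then de-dupe into a clean table with ROW_NUMBER().
    from datetime import datetime, timezone

    import requests
    from google.cloud import bigquery

    API_URL = "https://api.example.com/v1/campaign-stats"  # placeholder endpoint
    RAW_TABLE = "my-project.marketing.campaign_stats_raw"  # placeholder table

    def ingest() -> None:
        # Pull records from the external API (no pre-built connector assumed).
        resp = requests.get(API_URL, timeout=30)
        resp.raise_for_status()
        rows = resp.json()["results"]  # assumes a top-level "results" key

        # Stamp each record so the de-dupe step can keep the latest load.
        loaded_at = datetime.now(timezone.utc).isoformat()
        for row in rows:
            row["loaded_at"] = loaded_at

        # Append the raw records to BigQuery, letting it infer the schema.
        client = bigquery.Client()
        job_config = bigquery.LoadJobConfig(
            write_disposition="WRITE_APPEND", autodetect=True
        )
        client.load_table_from_json(rows, RAW_TABLE, job_config=job_config).result()

        # Keep only the most recently loaded row per campaign per day.
        dedupe_sql = """
            CREATE OR REPLACE TABLE `my-project.marketing.campaign_stats` AS
            SELECT * EXCEPT (rn)
            FROM (
                SELECT *, ROW_NUMBER() OVER (
                    PARTITION BY campaign_id, date ORDER BY loaded_at DESC
                ) AS rn
                FROM `my-project.marketing.campaign_stats_raw`
            )
            WHERE rn = 1
        """
        client.query(dedupe_sql).result()

    if __name__ == "__main__":
        ingest()

In production a script like this would typically run as a containerised Cloud Run job, with retries and alerting around both steps, which is what the Docker/Cloud Run responsibility above refers to.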
Additional Responsibilities
  • Support implementation of statistical techniques such as time series forecasting, propensity modelling, or multi-touch attribution to build predictive models for client campaign optimisation (see the sketch after this list).
  • Assist with deploying machine learning models into production environments, following MLOps best practices including versioning, monitoring, and automated retraining workflows.
  • Participate in scoping sessions to translate client briefs and business stakeholder requirements into detailed technical specifications, delivery plans, and accurate time estimates.
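
As a hedged illustration of the forecasting work mentioned above, the sketch below fits a Holt-Winters model with statsmodels to a synthetic daily-spend series; a real model would read client data from BigQuery rather than generating it, and the choice of model is an assumption for illustration.

    # Hypothetical sketch: two-week spend forecast with Holt-Winters
    # exponential smoothing; the input series here is synthetic.
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.holtwinters import ExponentialSmoothing

    # Synthetic daily spend with a weekly cycle, standing in for client data.
    days = pd.date_range("2024-01-01", periods=120, freq="D")
    spend = pd.Series(
        100
        + 10 * np.sin(2 * np.pi * np.arange(120) / 7)
        + np.random.default_rng(0).normal(0, 3, 120),
        index=days,
    )

    # Additive trend and 7-day additive seasonality suit a weekly pattern.
    model = ExponentialSmoothing(
        spend, trend="add", seasonal="add", seasonal_periods=7
    ).fit()
    print(model.forecast(14))  # two-week forecast to guide budget pacing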
Core Skills and Experience We Are Looking For
  • Solid foundational Python skills — comfortable writing scripts, working with APIs, and structuring readable code.
  • Working knowledge of SQL — able to write queries involving joins, aggregations, and filtering.
  • Practical experience with Docker — able to build and run containers locally.
  • Familiar with Git workflows — comfortable with branching, committing, and raising pull requests.
  • Understanding of CI/CD principles — aware of how automated testing and deployment pipelines work.
  • Ability to read and follow technical documentation, with a willingness to ask questions and learn how business requirements translate into technical solutions.
  • Excellent written and verbal communication skills for proactive knowledge sharing, constructive PR feedback, participating in daily standups, and documenting processes.
Beneficial Skills and Experience to Have
  • Hands-on experience with any major cloud ML platform, focusing on MLOps workflow patterns.
  • Practical experience with stream or batch processing tools such as GCP Dataflow or Apache Beam.
  • Familiarity with Python ML frameworks or data modelling tools like Dataform or dbt.
  • Familiarity with the structure and core offerings of GCP or AWS.
Contract Type

Permanent

Across the Havas group, we pride ourselves on our commitment to offering equal opportunities to all potential employees and have zero tolerance for discrimination. We are an equal opportunity employer and welcome applicants irrespective of age, sex, race, ethnicity, disability and other factors that have no bearing on an individual’s ability to perform their job.
