
Data Engineer - AI Practice Team

American Bureau of Shipping

Warrington

On-site

GBP 60,000 - 80,000

Full time

Posted yesterday


Job summary

A global engineering firm is seeking a talented Data Engineer to join its team in Warrington, UK. This full-time role involves designing data foundations for AI applications, building ETL/ELT pipelines, and ensuring data quality and governance. The ideal candidate will have more than six years of data engineering experience and a strong command of SQL and Python. Proficiency with cloud services and modern data processing frameworks is essential. The position offers significant scope to shape data strategy for high-impact client projects.

Qualifications

  • 6+ years of professional data engineering experience.
  • Proven experience with analytics platforms and AI/ML workloads.
  • Hands-on experience with at least one major cloud provider (AWS, Azure, or Google Cloud).

Responsibilities

  • Design, build, and maintain scalable ETL/ELT pipelines.
  • Integrate knowledge repositories into centralized data platforms.
  • Collaborate with AI/ML engineers for model-ready datasets.

Skills

SQL
Python
Data processing frameworks
ETL/ELT orchestration tools
Unstructured data handling
Data modeling
Cloud-native data services
Software engineering best practices

Education

Bachelor’s degree in Computer Science or a related field
Master’s degree preferred

Tools

Spark
Databricks
Snowflake
BigQuery
Airflow
AWS Glue

Job description

ABS is seeking an exceptional Data Engineer to join us full-time on our Artificial Intelligence (AI) Practice Team. In this role, you will design and operate the data foundations that power AI chat assistants, custom AI models, and AI-driven process optimization for ABS Consulting clients. You will build robust pipelines that integrate structured and unstructured data, standardize and tag enterprise content, and enable scalable, low-latency retrieval for AI workloads. Working closely with AI engineers, consultants, and domain experts, you will turn messy real-world data into production-grade data assets that deliver measurable business impact.

What You Will Do

  • Design, build, and maintain scalable ETL/ELT pipelines to ingest, clean, and transform structured and unstructured data for AI assistants and custom models.
  • Integrate diverse knowledge repositories (documents, policies, procedures, standards, databases) into centralized data platforms that support retrieval-augmented generation (RAG) and search.
  • Implement data standardization, normalization, and tagging pipelines to align content with enterprise taxonomies and ontologies.
  • Collaborate with AI/ML engineers to productionize model‑ready datasets, feature stores, and embeddings for prediction, classification, and optimization use cases.
  • Optimize data workflows for reliability, cost, and performance across batch and streaming workloads, including monitoring, alerting, and capacity planning.
  • Establish and enforce data quality, lineage, and governance practices to ensure trustworthy inputs to AI systems and process‑automation solutions.
  • Automate and templatize common data engineering patterns to accelerate delivery across multiple client engagements and industry domains.
  • Partner with consultants and business stakeholders to translate process optimization and analytics requirements into robust, maintainable data solutions.

What You Will Need

Education and Experience

  • Bachelor’s degree in Computer Science, Data Engineering, Information Systems, or a closely related technical field; Master’s degree preferred.
  • 6+ years of professional data engineering experience designing, building, and operating production data solutions.
  • Demonstrated experience working in data‑intensive environments (e.g., analytics platforms, AI/ML workloads, large‑scale content repositories, or enterprise data platforms).
  • Hands‑on experience delivering solutions on at least one major cloud provider (AWS, Azure, or Google Cloud), including managed data and analytics services.

Knowledge, Skills, and Abilities

  • Strong command of SQL and at least one programming language commonly used in data engineering (Python preferred) for building production‑grade data pipelines.
  • Hands‑on experience with modern data processing frameworks and platforms (e.g., Spark, Databricks, Snowflake, BigQuery, Synapse, or similar).
  • Proficiency with ETL/ELT orchestration tools and workflows (e.g., Airflow, dbt, Azure Data Factory, AWS Glue, or equivalent).
  • Experience designing and operating data lakes/lakehouses and integrating multiple data sources (relational, NoSQL, files, APIs) into cohesive data models.
  • Deep experience working with unstructured and semi‑structured data (documents, PDFs, JSON, logs), including content extraction, normalization, and metadata/tagging.
  • Familiarity with AI/ML data patterns, including feature engineering, embeddings, vector databases, and retrieval‑augmented generation (RAG) pipelines.
  • Strong understanding of data modeling, data quality, data governance, and lineage practices for regulated or compliance‑sensitive environments.
  • Proficiency with cloud‑native data services (e.g., S3/ADLS/GCS, managed warehouses, streaming services like Kafka/Kinesis/Event Hubs).
  • Solid grounding in software engineering best practices (version control, CI/CD, testing, code review) as applied to data engineering.
  • Must hold valid right-to-work status in the UK.

Reporting Relationships

This role reports to a project manager and does not initially include direct reports.
