
Data Engineer

QAD

Remote

GBP 60,000 - 80,000

Full time

Job summary

A leading enterprise software provider in the UK is seeking a Data Engineer to design and optimize data pipelines in a fully remote environment. The role demands extensive experience in data engineering, particularly with Snowflake and AWS services. The ideal candidate will leverage advanced SQL and Python skills for data manipulation and machine learning integration. Join us to solve real-world problems in manufacturing and supply chain within a dynamic and multidisciplinary team.

Qualifications

  • 5+ years of experience in data engineering in a cloud environment.
  • Expertise in Snowflake for modeling and optimization.
  • Strong knowledge of AWS services like S3 and IAM.

Responsibilities

  • Design and maintain scalable data pipelines.
  • Optimize data in Snowflake.
  • Implement ETL/ELT flows from various sources.

Skills

Data handling
Cloud services
SQL
Python
ETL design

Tools

Snowflake
AWS

Job description

QAD Inc. is a leading provider of adaptive, cloud-based enterprise software and services for global manufacturing companies. Global manufacturers face ever-increasing disruption caused by technology-driven innovation and changing consumer preferences. To survive and thrive, manufacturers must be able to innovate and change business models at unprecedented speed. QAD calls these companies Adaptive Manufacturing Enterprises. QAD solutions help customers in the automotive, life sciences, packaging, consumer products, food and beverage, high tech, and industrial manufacturing industries rapidly adapt to change and innovate for competitive advantage.

We are looking for talented individuals who want to join us on our mission to help solve relevant real-world problems in manufacturing and the supply chain.

This role is fully remote within the UK; candidates must already hold full UK work authorization, as no visa sponsorship is available.

Job Description

In a data-driven and AI-oriented environment, you will be responsible for the design, productionization, and optimization of inter-application data pipelines. You will be involved in the entire data chain, from ingestion through to its use by data science teams and AI systems in production, within a small, multidisciplinary team. The role sits within the Process Intelligence (PI) team, which combines functions such as Process Mining, Real-Time Monitoring, and Predictive AI.

Key responsibilities

  • Design and maintain scalable data pipelines.
  • Structure, transform, and optimize data in Snowflake.
  • Implement multi-source ETL/ELT flows (ERP, APIs, files); a minimal load-step sketch follows this list.
  • Leverage the AWS environment, including S3, IAM, and various data services.
  • Prepare data for Data Science teams and integrate AI/ML models into production.
  • Ensure data quality, security, and governance.
  • Provide input on data architecture.
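
For illustration only, the sketch below shows the kind of load step these responsibilities describe: a daily ERP export pulled from S3 and copied into a raw Snowflake table. Every name in it (bucket, table, warehouse, credentials) is a hypothetical placeholder, not a reference to QAD systems.

```python
# Illustrative sketch only: a daily ERP-export load from S3 into Snowflake.
# All names (bucket, table, warehouse, credentials) are hypothetical placeholders.
import os

import boto3
import snowflake.connector


def run_daily_load() -> None:
    # 1. Pull the raw CSV export from S3 (credentials resolved via IAM / environment).
    s3 = boto3.client("s3")
    local_path = "/tmp/orders.csv"
    s3.download_file("example-erp-exports", "orders/2024-01-01/orders.csv", local_path)

    # 2. Stage the file and COPY it into a raw Snowflake table.
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="LOAD_WH",   # placeholder
        database="RAW",        # placeholder
        schema="ERP",          # placeholder
    )
    try:
        cur = conn.cursor()
        # PUT uploads the local file to the table stage for the ORDERS table.
        cur.execute(f"PUT file://{local_path} @%ORDERS OVERWRITE = TRUE")
        cur.execute(
            "COPY INTO ORDERS FROM @%ORDERS "
            "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1) "
            "ON_ERROR = 'ABORT_STATEMENT'"
        )
    finally:
        conn.close()


if __name__ == "__main__":
    run_daily_load()
```

In practice a step like this would run under an orchestrator and be followed by transformations and data-quality checks, in line with the quality and governance responsibility above.
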
Qualifications

  • 5+ years of experience in data engineering, including significant experience in a cloud environment.
  • Snowflake (MUST HAVE): Expertise in modeling, query optimization, cost management, and security.
  • AWS: Strong knowledge of data and cloud services including S3, IAM, Glue, and Lambda.
  • Languages: Advanced SQL and Python for data manipulation, automation, and ML integration.
  • Data Engineering: Proven experience in ETL/ELT pipeline design.
  • AI/ML Integration: Ability to prepare data for model training and to deploy AI models into production workflows (batch or real-time); an illustrative batch-scoring sketch follows this list.
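
As a rough illustration of the AI/ML integration point, the following sketch batch-scores a feature table prepared in Snowflake using a previously trained model. Table, column, and model-file names are all hypothetical.

```python
# Illustrative sketch only: batch-scoring a prepared Snowflake feature table
# with a previously trained model. Table, column, and model names are hypothetical.
import os
import pickle

import pandas as pd
import snowflake.connector


def score_batch() -> pd.DataFrame:
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="ML_WH",        # placeholder
        database="ANALYTICS",     # placeholder
        schema="FEATURES",        # placeholder
    )
    try:
        # 1. Pull the feature set prepared for the data science team.
        cur = conn.cursor()
        cur.execute(
            "SELECT ORDER_ID, LEAD_TIME_DAYS, LINE_COUNT FROM ORDER_FEATURES"
        )
        features = cur.fetch_pandas_all()

        # 2. Score with a trained model artifact (hypothetical pickle file).
        with open("late_delivery_model.pkl", "rb") as fh:
            model = pickle.load(fh)
        features["LATE_RISK"] = model.predict_proba(
            features[["LEAD_TIME_DAYS", "LINE_COUNT"]]
        )[:, 1]
        return features
    finally:
        conn.close()
```
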
Nice to Have

  • Experience with agentic AI architectures, including agent orchestration and decision loops.
  • Integration of agent-driven AI models into existing data pipelines.
  • Knowledge of modern architectures such as Lakehouse or Data Mesh.