At MR DIY International, we're more than a global home improvement brand; we're a catalyst for ambitious talent ready to grow beyond borders.
With more than 5,000 stores across 14 countries, we offer unmatched international exposure to those looking to build a meaningful, global career. From retail operations and merchandising to strategy, tech, and supply chain, your work here shapes how millions of customers shop every day.
Job Summary
The Data Engineer is a key technical contributor to the organisation's data infrastructure, reporting directly to the Head of Data. This role focuses on the hands‑on work of modernising our company's data ecosystem while upholding engineering excellence. You will ensure the reliability of our data infrastructure and help foster a high‑performing engineering culture.
Key Responsibilities
- Support the Head of Data and Principal Data Engineers in executing the long‑term data strategy, contributing to the development of architecture, tooling, and engineering best practices.
- Design and implement highly scalable, secure, and reliable data pipelines, with end‑to‑end observability (monitoring, logging, alerting, data quality checks, and automated error handling).
- Build and maintain a modern data architecture, including orchestration frameworks, transformation layers with dbt, and a medallion data model that supports analytics and operational use cases.
- Uphold engineering excellence by adhering to strict standards in version control, CI/CD pipelines, branching strategies, Infrastructure-as-Code, documentation, and code review practices.
- Implement strong data governance and security practices within the codebase, including access controls, encryption, privacy compliance, metadata management, naming conventions, and lifecycle management.
- Optimise data platform performance and cost by monitoring usage, improving query efficiency, and managing resources across storage and compute workloads.
- Partner with analytics engineers to ensure the delivery of trusted, reliable insights to business stakeholders via governed self‑service.
Job Requirements
- 2 to 4 years of experience in data engineering or similar roles.
- Exposure to the modern data stack (Google Cloud, BigQuery, dbt, Airflow, Looker or similar).
- Experience building pipelines for heterogeneous data sources (ERP, SaaS, APIs, spreadsheets, etc.).
- Experience troubleshooting and resolving performance bottlenecks across the data stack, such as query optimisation, resource allocation, and storage management.
- Understanding of dimensional modelling (Kimball) and data warehouse best practices.
- Familiarity with DevOps for data: version control, CI/CD, Infrastructure-as-Code, monitoring, and observability.
- Knowledge of data governance, compliance, and security frameworks.
- Good communication skills, with the ability to bridge technical and business perspectives.