

Data Architect - Azure Data Engineering

Fractal

Greater London

On-site

GBP 70,000 - 90,000

Full time



Job summary

A strategic AI partner is seeking a Data Architect in London to design and build end-to-end data pipelines using PySpark, Databricks, and the Azure Data Platform. The role involves hands-on development of data solutions and responsibility for data quality across teams. Ideal candidates will have exposure to data governance and cloud-native architectures, and familiarity with AI/ML workflows. The company values team collaboration and innovative thinking in data engineering.

Qualifications

  • Strong experience in designing and building data pipelines.
  • Hands-on approach to development and debugging.
  • Ability to work within hybrid environments.

Responsibilities

  • Design and build end-to-end data pipelines.
  • Support data governance practices.
  • Act as a technical anchor for the data engineering team.

Skills

PySpark
Databricks
Azure Data Platform
Data Engineering Fundamentals
Data Governance
Cloud-native Data Architectures
AI/ML Data Workflows

Tools

Azure Data Factory
Synapse

Job description

It's fun to work in a company where people truly BELIEVE in what they are doing!

We're committed to bringing passion and customer focus to the business.

Data Architect - Azure Data Engineering

Fractal is a strategic AI partner to Fortune 500 companies with a vision to power every human decision in the enterprise. Fractal is building a world where individual choices, freedom, and diversity are the greatest assets. An ecosystem where human imagination is at the heart of every decision. Where no possibility is written off, only challenged to get better. We believe that a true Fractalite is the one who empowers imagination with intelligence. Fractal has been featured as a Great Place to Work by The Economic Times in partnership with the Great Place to Work® Institute and recognized as a ‘Cool Vendor’ and a ‘Vendor to Watch’ by Gartner.

Please visit Fractal | Intelligence for Imagination for more information about Fractal.

Location: London

Core Technical Responsibilities
  • Design and build end-to-end data pipelines (batch and near real-time) using PySpark, Databricks, and Azure Data Platform (ADF, ADLS, Synapse)
  • Be hands-on in development, debugging, optimization, and production support of data pipelines
  • Work with or extend existing/proprietary ETL frameworks (e.g., Mar's Simpel or similar) and improve performance and reliability
  • Implement data modeling, transformation, and orchestration patterns aligned with best practices
  • Apply data engineering fundamentals including partitioning, indexing, caching, cost optimization, and performance tuning
  • Collaborate with upstream and downstream teams to ensure data quality, reliability, and SLAs
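For context on the kind of pipeline work listed above, the sketch below shows a minimal PySpark batch job on Databricks that reads raw files from ADLS, applies a simple transformation, and writes a date-partitioned Delta table. The storage account, container names, paths, and column names are illustrative assumptions, not details taken from this posting.

```python
# Illustrative only: a minimal batch pipeline of the kind described above.
# Storage account, containers, paths, and columns are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_batch_pipeline").getOrCreate()

# Ingest raw CSV files landed in ADLS (abfss path is a placeholder).
raw = (
    spark.read
    .option("header", "true")
    .csv("abfss://raw@examplestorage.dfs.core.windows.net/orders/")
)

# Basic typing, derivation of a partition column, and a simple quality filter.
orders = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("order_id").isNotNull())
)

# Write a Delta table partitioned by date so downstream reads stay cheap.
(
    orders.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .save("abfss://curated@examplestorage.dfs.core.windows.net/orders/")
)
```

Partitioning the output by date is one example of the fundamentals the role calls out (partitioning, caching, cost optimization); the same job could be scheduled and orchestrated from Azure Data Factory.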
Architecture & Design
  • Contribute to the design of cloud-native data architectures covering ingestion, processing, storage, and consumption
  • Translate business and analytical requirements into practical, scalable data solutions
  • Support data governance practices including metadata, lineage, data quality checks, and access controls
  • Work within hybrid environments (on-prem to cloud) and support modernization initiatives
  • Understand and apply data mesh concepts where relevant (domain ownership, reusable data products, basic contracts)
  • Evaluate tools and frameworks with a build vs. buy mindset, recommending pragmatic solutions
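As a rough illustration of the data-quality checks mentioned under governance, the fragment below runs a couple of simple assertions against a curated table before it is published; the table path, columns, and thresholds are assumptions made for the sake of the example.

```python
# Illustrative data-quality gate; paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_quality_checks").getOrCreate()

orders = spark.read.format("delta").load(
    "abfss://curated@examplestorage.dfs.core.windows.net/orders/"
)

total = orders.count()
null_ids = orders.filter(F.col("order_id").isNull()).count()
negative_amounts = orders.filter(F.col("amount") < 0).count()

# Fail fast so downstream consumers never see a bad publish.
if total == 0 or null_ids > 0 or negative_amounts > 0:
    raise ValueError(
        f"Quality checks failed: rows={total}, null_ids={null_ids}, "
        f"negative_amounts={negative_amounts}"
    )
```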
Team & Delivery Responsibilities
  • Act as a technical anchor for a data engineering team
  • Provide technical guidance, code reviews, and mentoring to engineers
  • Own delivery for assigned data products or pipelines — from design through deployment
  • Collaborate with product owners, analysts, and architects to clarify requirements and priorities
Stakeholder & Communication
  • Engage with business and analytics stakeholders to understand data needs and translate them into technical solutions
  • Clearly communicate technical designs and trade-offs to both technical and non-technical audiences
  • Escalate risks and propose mitigation strategies proactively
  • Support documentation of architecture, pipelines, and operational processes
Good to Have
  • Exposure to AI/ML data workflows (feature engineering, model inputs, MLOps basics)
  • Awareness of LLMs / Agentic AI architectures from a data platform perspective
  • Experience with other platforms such as AWS, GCP, Snowflake, BigQuery, Redshift
  • Familiarity with data governance or catalog tools (DataHub, Collibra, dbt, etc.)
  • Experience working in CPG, Retail, Supply Chain, or similar domains