Job Search and Career Advice Platform


Data Engineer DV Cleared

Datatech Analytics

Greater London

Hybrid

GBP 50,000 - 70,000

Full time

Today


Job summary

A leading UK consulting and technology firm seeks a Data Engineer to design and deploy modern data pipelines for defence and security programmes. Candidates must have active DV clearance. Responsibilities include building robust data pipelines and collaborating with teams to drive outcomes. The role offers a hybrid working pattern, with team presence required at client sites or in the office at least two days a week.

Qualifications

  • Active DV clearance is essential.
  • Experience in production-grade data pipelines.
  • Knowledge of big data technologies and agile practices.

Responsibilities

  • Design and deploy production-grade data pipelines.
  • Build and operate robust pipelines across ingestion, processing, and consumption.
  • Collaborate with stakeholders and delivery teams.

Skills

Production pipeline design and deployment experience
Strong engineering capability with Python
SQL
Big data tooling (e.g., Spark)
APIs

Tools

AWS
Azure
GCP
Job description

Data Engineer Opportunity, DV-cleared only

London | Manchester | Bristol

A progressive, leading-edge UK consulting and technology organisation is hiring Data Engineers to deliver mission-critical work across defence and security programmes, building modern data platforms and production-grade pipelines that enable better decisions at pace. Active DV clearance is essential; only DV-cleared candidates will be considered.

The role

You’ll design and deploy production-grade data pipelines, from ingestion through to consumption, within a modern big data architecture. Delivery is hands-on and follows agile engineering practices.

Typical responsibilities

  • Build and operate robust pipelines across ingestion, processing, and consumption
  • Use scripting, APIs, and SQL to extract, transform, and curate data
  • Process large structured and unstructured datasets, integrating multiple sources
  • Collaborate with stakeholders and delivery teams to drive outcomes
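The posting names no specific stack beyond Python, SQL, and big data tooling, but the extract-transform-curate responsibilities above follow a familiar shape. As a rough, hypothetical sketch only (table names, fields, and the single-node SQLite backend are all illustrative assumptions, not details from the advert), that ingestion-to-consumption flow might look like:

```python
import sqlite3

# Hypothetical raw records, as they might arrive from an upstream API or file drop.
RAW_RECORDS = [
    {"id": 1, "source": "sensor_a", "value": " 42 "},
    {"id": 2, "source": "sensor_b", "value": "17"},
    {"id": 3, "source": "sensor_a", "value": ""},  # malformed: dropped during transform
]

def ingest(conn, records):
    """Land raw records, untyped, into a staging table."""
    conn.execute("CREATE TABLE staging (id INTEGER, source TEXT, value TEXT)")
    conn.executemany("INSERT INTO staging VALUES (:id, :source, :value)", records)

def transform(conn):
    """Curate staging data into a typed, cleaned table using SQL."""
    conn.execute(
        """
        CREATE TABLE curated AS
        SELECT id, source, CAST(TRIM(value) AS INTEGER) AS value
        FROM staging
        WHERE TRIM(value) <> ''
        """
    )

def consume(conn):
    """Serve a per-source aggregate to downstream consumers."""
    return dict(conn.execute("SELECT source, SUM(value) FROM curated GROUP BY source"))

conn = sqlite3.connect(":memory:")
ingest(conn, RAW_RECORDS)
transform(conn)
print(consume(conn))  # per-source totals from the curated table
```

In a production setting the same staging/curated split would typically run over Spark or a cloud warehouse rather than SQLite, but the separation of ingestion, transformation, and consumption is the point of the sketch.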

Core skills (indicative)

  • Production pipeline design and deployment experience
  • Strong engineering capability with Python and SQL, plus big data tooling (e.g., Spark, with Java/Scala where relevant)
  • AWS, Azure, GCP

Working pattern

Hybrid working, with the team on client site or in the office a minimum of two days per week; actual time and location vary by role and assignment.
