
Data Engineer

CBSbutler Holdings Limited trading as CBSbutler

Sheffield

On-site

GBP 60,000 - 80,000

Full time

3 days ago


Job summary

A recruitment agency is seeking a Data Engineer for a contract role based in Sheffield. The position involves designing and operating telemetry and observability data pipelines within an OpenShift and Kafka ecosystem. Candidates should have extensive hands-on experience with Kafka and OpenShift, plus strong data engineering skills in Python. This is a contract position offering competitive daily rates; BPSS clearance is required. Immediate interviews are available.

Job description
Overview

Data Engineer (Contract)


9+ month contract based in Sheffield
GBP 395 - 442 per day, Inside IR35
BPSS clearance required; candidates must be eligible


My client is seeking a Data Engineer to design and operate large-scale telemetry and observability data pipelines within a modern OpenShift and Kafka ecosystem. This role is central to enabling proactive, Level 4 observability, delivering high-quality metrics, logs, and traces to support platform reliability, operational insight, and automation.


Responsibilities


  • Design, implement and maintain scalable data pipelines to ingest and process OpenShift telemetry (metrics, logs, traces)

  • Stream telemetry through Kafka (producers, topics, schemas) and build resilient consumer services for enrichment and transformation

  • Engineer multi-tenant observability data models, ensuring data lineage, quality controls and SLAs across streaming layers

  • Integrate processed telemetry into Splunk for dashboards, analytics, alerting and operational insights

  • Implement schema management and governance using Avro/Protobuf, including versioning and compatibility strategies

  • Build automated validation, replay and backfill mechanisms to ensure data reliability and recovery

  • Instrument services using OpenTelemetry, standardising tracing, metrics and structured logging

  • Apply LLMs to enhance observability, for example through query assistance, anomaly summarisation and runbook generation

  • Collaborate with Platform, SRE and Application teams to align telemetry, alerts and SLOs

  • Ensure pipelines meet security, compliance and best-practice standards

  • Produce clear documentation covering data flows, schemas, dashboards and operational runbooks
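
As a rough illustration of the streaming responsibilities above, the enrichment step such a Kafka consumer service might apply could look like the following Python sketch. The topic name, telemetry field names and tenant mapping are illustrative assumptions, not details taken from the role.

```python
import json

# Illustrative sketch only: the kind of enrichment a Kafka consumer
# service might apply to raw OpenShift telemetry before forwarding it
# to Splunk. Field names and the tenant mapping are assumptions.

TENANT_BY_NAMESPACE = {  # multi-tenant mapping, normally loaded from config
    "payments-prod": "payments",
    "checkout-prod": "checkout",
}

def enrich(raw_message: bytes) -> dict:
    """Decode one raw telemetry message and attach tenant context."""
    event = json.loads(raw_message)
    namespace = event.get("kubernetes", {}).get("namespace", "unknown")
    event["tenant"] = TENANT_BY_NAMESPACE.get(namespace, "unassigned")
    event["pipeline_stage"] = "enriched"
    return event

# In a real service this would run inside the consumer loop, e.g.
#   for msg in consumer:
#       producer.produce("telemetry.enriched", json.dumps(enrich(msg.value())))
```

Keeping the enrichment logic a pure function like this makes it straightforward to unit-test independently of the Kafka client.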


Skills & Experience


  • Strong hands-on experience building streaming data pipelines with Kafka (producers/consumers, schema registry, Kafka Connect, KSQL/KStreams)

  • Experience with OpenShift / Kubernetes telemetry, including OpenTelemetry and Prometheus

  • Proven capability integrating telemetry into Splunk (HEC, Universal Forwarders, sourcetypes, CIM, dashboards, alerting)

  • Solid data engineering skills in Python (or similar) for ETL/ELT, enrichment and validation
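
The validation side of that last requirement can be sketched as a small pure-Python check; the required fields below are hypothetical, chosen only to illustrate the pattern.

```python
# Illustrative sketch of a record-level validation step for an ETL/ELT
# pipeline. The required fields here are hypothetical examples.

REQUIRED_FIELDS = ("timestamp", "source", "value")

def validate(record: dict) -> list[str]:
    """Return a list of validation errors; an empty list means the record passes."""
    errors = [f"missing field: {f}" for f in REQUIRED_FIELDS if f not in record]
    if "value" in record and not isinstance(record["value"], (int, float)):
        errors.append("value must be numeric")
    return errors

# Records that fail validation would typically be routed to a dead-letter
# topic for later replay rather than silently dropped.
```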


Please apply for an immediate interview!


CBSbutler is operating and advertising as an Employment Agency for permanent positions and as an Employment Business for interim / contract / temporary positions. CBSbutler is an Equal Opportunities employer and we encourage applicants from all backgrounds.
