Senior Streaming Platform Engineer (Data)

City Of London

Hybrid

GBP 70,000 - 90,000

Full time


Job summary

A technology company in the UK is looking for a highly skilled Streaming Platform Engineer to join their Data Streaming Platform team. This hybrid role involves designing, building, and maintaining a high-performance, real-time data streaming platform. The ideal candidate should be experienced in Apache Kafka, Flink, and have a solid understanding of distributed systems. Competitive compensation and opportunities for growth are included.

Qualifications

  • Strong production experience with Apache Kafka and its ecosystem.
  • Experience building and optimizing real-time data pipelines.
  • Hands-on experience with major Cloud Platforms (AWS, GCP, or Azure).
  • Proficiency in at least one programming language among Python, TypeScript, Java, Scala, or Go.

Responsibilities

  • Design and maintain core infrastructure for the data streaming platform.
  • Implement and optimize data pipelines and stream processing applications.
  • Collaborate with teams to ensure data quality and integration.
  • Develop automation and tooling for platform provisioning and CI/CD.
  • Monitor platform performance and troubleshoot issues.
  • Stay up-to-date with advancements in streaming technologies.

Skills

Apache Kafka
Apache Flink
Spark Streaming
Python
Kubernetes
Docker

Tools

AWS
GCP
Azure
Terraform
GitHub Actions
New Relic
Prometheus
Grafana

Job description

In short

We are seeking a highly skilled and motivated Streaming Platform Engineer to join the Data Streaming Platform team. This is a unique hybrid role that combines the disciplines of platform, software, and data engineering to build, scale, and maintain our high-performance, real-time data streaming platform. The ideal candidate should have a passion for architecting robust, scalable systems to enable data-driven products and services at massive scale.

Your mission
  • Design, build, and maintain the core infrastructure for our real-time data streaming platform, ensuring high availability, reliability, and low latency.
  • Implement and optimize data pipelines and stream processing applications using technologies like Apache Kafka, Apache Flink, and Spark Streaming.
  • Collaborate with software and data engineering teams to define event schemas, ensure data quality, and support the integration of new services into the streaming ecosystem.
  • Develop and maintain automation and tooling for platform provisioning, configuration management, and CI/CD pipelines.
  • Champion the development of self-service tools and workflows that empower engineers to manage their own streaming data needs, reducing friction and accelerating development.
  • Monitor platform performance, troubleshoot issues, and implement observability solutions (metrics, logging, tracing) to ensure the platform’s health and stability.
  • Stay up-to-date with the latest advancements in streaming and distributed systems technologies and propose innovative solutions to technical challenges.
Your story

This is a hybrid role, and we understand that candidates may not have experience with every single technology listed. We encourage you to apply if you have a strong foundation in a majority of these areas.

  • Streaming Platforms & Architecture: Strong production experience with Apache Kafka and its ecosystem (e.g., Confluent Cloud, Kafka Streams, Kafka Connect). Solid understanding of distributed systems and event-driven architectures and how they drive modern microservices and data pipelines.
  • Real-Time Data Pipelines: Experience building and optimizing real-time data pipelines for ML, analytics and reporting, leveraging technologies such as Apache Flink, Spark Structured Streaming, and integration with low-latency OLAP systems like Apache Pinot.
  • Platform Infrastructure & Observability: Hands‑on experience with major Cloud Platforms (AWS, GCP, or Azure), Kubernetes and Docker, coupled with proficiency in Infrastructure as Code (Terraform). Experience integrating and managing CI/CD pipelines (GitHub Actions) and implementing comprehensive Observability solutions (New Relic, Prometheus, Grafana) for production environments.
  • Programming Languages: Proficiency in at least one of the following: Python, TypeScript, Java, Scala, or Go.
  • Data Technologies: Familiarity with data platform concepts, including data lakes and data warehouses.