Senior Data Streaming Platform Engineer

City Of London

Hybrid

GBP 60,000 - 80,000

Full time

Yesterday

Job summary

A leading technology company in the UK is looking for a highly skilled Streaming Platform Engineer to design and maintain their data streaming platform. This hybrid role requires a strong foundation in Apache Kafka, real-time data pipelines, and cloud infrastructure. Ideal candidates will have experience with technologies like Apache Flink and AWS. The position offers opportunities to work on innovative data-driven projects while ensuring high availability and reliability.

Qualifications

  • Strong production experience with Apache Kafka and its ecosystem.
  • Experience with real-time data pipelines for ML, analytics, and reporting.
  • Proficiency in Infrastructure as Code (Terraform) and managing CI/CD pipelines.

Responsibilities

  • Design and maintain the core infrastructure for our real-time data streaming platform.
  • Implement and optimize data pipelines using technologies like Apache Kafka and Flink.
  • Monitor platform performance and troubleshoot issues.

Skills

Apache Kafka
Real-time data pipelines
Distributed systems
Python
Cloud Platforms (AWS, GCP, or Azure)
Kubernetes
Docker
Observability (New Relic, Prometheus, Grafana)

Tools

Terraform
Spark Structured Streaming
Apache Flink
GitHub Actions
Apache Pinot

Job description

In short: We are seeking a highly skilled and motivated Streaming Platform Engineer to join the Data Streaming Platform team. This is a unique hybrid role that combines the disciplines of platform, software, and data engineering to build, scale, and maintain our high-performance, real-time data streaming platform. The ideal candidate has a passion for architecting robust, scalable systems that enable data-driven products and services at massive scale.

Your mission

  • Design, build, and maintain the core infrastructure for our real-time data streaming platform, ensuring high availability, reliability, and low latency.
  • Implement and optimize data pipelines and stream processing applications using technologies like Apache Kafka, Apache Flink, and Spark Streaming.
  • Collaborate with software and data engineering teams to define event schemas, ensure data quality, and support the integration of new services into the streaming ecosystem.
  • Develop and maintain automation and tooling for platform provisioning, configuration management, and CI/CD pipelines.
  • Champion the development of self-service tools and workflows that empower engineers to manage their own streaming data needs, reducing friction and accelerating development.
  • Monitor platform performance, troubleshoot issues, and implement observability solutions (metrics, logging, tracing) to ensure the platform's health and stability.
  • Stay up-to-date with the latest advancements in streaming and distributed systems technologies and propose innovative solutions to technical challenges.
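To illustrate the kind of stream-processing work described above, here is a minimal, dependency-free sketch of a tumbling-window aggregation, the sort of computation Kafka Streams or Flink performs at scale. This is illustrative only: the event names and window size are hypothetical, and real deployments must also handle late data, watermarks, and fault-tolerant state.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_ms=60_000):
    """Count events per key within fixed (tumbling) time windows.

    A toy stand-in for a Flink/Kafka Streams windowed aggregation;
    `events` is an iterable of (timestamp_ms, key) pairs.
    """
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        # Align each event to the start of its window.
        window_start = (ts // window_ms) * window_ms
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}

events = [(1_000, "click"), (2_000, "click"), (61_000, "view")]
print(tumbling_window_counts(events))
# {0: {'click': 2}, 60000: {'view': 1}}
```

In production, the same grouping logic would be expressed through the framework's windowing API so that state is checkpointed and recoverable rather than held in a local dictionary.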

Your story

This is a hybrid role, and we understand that candidates may not have experience with every single technology listed. We encourage you to apply if you have a strong foundation in a majority of these areas.

  • Streaming Platforms & Architecture: Strong production experience with Apache Kafka and its ecosystem (e.g., Confluent Cloud, Kafka Streams, Kafka Connect). Solid understanding of distributed systems and event-driven architectures and how they drive modern microservices and data pipelines.
  • Real-Time Data Pipelines: Experience building and optimizing real-time data pipelines for ML, analytics and reporting, leveraging technologies such as Apache Flink, Spark Structured Streaming, and integration with low-latency OLAP systems like Apache Pinot.
  • Platform Infrastructure & Observability: Hands-on experience with major Cloud Platforms (AWS, GCP, or Azure), Kubernetes and Docker, coupled with proficiency in Infrastructure as Code (Terraform). Experience integrating and managing CI/CD pipelines (GitHub Actions) and implementing comprehensive Observability solutions (New Relic, Prometheus, Grafana) for production environments.
  • Programming Languages: Proficiency in at least one of the following: Python, TypeScript, Java, Scala, or Go.
  • Data Technologies: Familiarity with data platform concepts, including data lakes and data warehouses.
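The event-schema work mentioned above is normally done with Avro or Protobuf against a schema registry (e.g., the Confluent Schema Registry). As a hedged, dependency-free illustration of the underlying idea, a required-fields and type check might look like this; the field names are hypothetical:

```python
# Illustrative only: production platforms use Avro/Protobuf with a schema
# registry rather than hand-rolled checks. Field names are hypothetical.
SCHEMA = {"event_type": str, "user_id": str, "timestamp_ms": int}

def validate_event(event: dict) -> list:
    """Return a list of schema violations (empty list means valid)."""
    errors = []
    for field, expected in SCHEMA.items():
        if field not in event:
            errors.append(f"missing field: {field}")
        elif not isinstance(event[field], expected):
            errors.append(f"{field}: expected {expected.__name__}")
    return errors

print(validate_event({"event_type": "click", "user_id": "u1", "timestamp_ms": 42}))
# []
```

A registry-backed schema adds what this sketch cannot: versioning and compatibility rules, so producers and consumers can evolve independently.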