A global consulting firm is seeking a Confluent Consulting Engineer. This remote role requires designing and maintaining scalable data pipelines using Kafka and Confluent components. The ideal candidate has 5+ years of Kafka experience, proficiency in Java, Python, or Scala, and strong cloud deployment skills. Excellent problem-solving and communication abilities are a must.
Responsibilities
As a Confluent Consulting Engineer, you will be responsible for designing, developing, and maintaining scalable real‑time data pipelines and integrations using Kafka and Confluent components. You will collaborate with data engineers, architects, and DevOps teams to deliver robust streaming solutions.
Must-have skills
5+ years of hands‑on experience with Apache Kafka (any distribution: open‑source, Confluent, Cloudera, AWS MSK, etc.)
Strong proficiency in Java, Python, or Scala
Solid understanding of event‑driven architecture and data streaming patterns
Experience deploying Kafka on cloud platforms such as AWS, GCP, or Azure
Familiarity with Docker, Kubernetes, and CI/CD pipelines
Excellent problem‑solving and communication abilities
Fluency in both German and English
Preferred experience
Experience with Kafka Connect, Kafka Streams, KSQL, Schema Registry, REST Proxy, Confluent Control Center
Hands‑on with Confluent Cloud services, including ksqlDB Cloud and Apache Flink
Familiarity with Stream Governance, Data Lineage, Stream Catalog, Audit Logs, RBAC
Confluent certifications (Developer, Administrator, or Flink Developer)
Experience with Confluent Platform, Confluent Cloud managed services, multi‑cloud deployments, and Confluent for Kubernetes
Knowledge of data mesh architectures, KRaft migration, and modern event streaming patterns
Exposure to monitoring tools (Prometheus, Grafana, Splunk)
Experience with data lakes, data warehouses, or big data ecosystems