Job title: Solution Architect
Location: Leeds, UK (Hybrid, 3 days/week)
Type: Contract
Client: Wipro
Job description
As part of the CTO Data Ingestion Service, the incumbent will be responsible for:
- Designing and architecting scalable, real-time streaming systems on Kafka, with a focus on on-premises Cloudera open-source Kafka and on disaster recovery.
- Configuring, deploying, and maintaining Kafka clusters to ensure high availability, resiliency, and scalability, including understanding and explaining features such as KRaft (a producer resiliency sketch follows this list).
- Integrating Kafka with other data processing tools and platforms such as Kafka Streams, Kafka Connect, Spark Streaming, Schema Registry, Flink, and Beam.
- Collaborating with cross-functional teams to understand data requirements and design solutions that meet business needs.
- Implementing security measures to protect Kafka clusters and data streams.
- Monitoring Kafka performance, troubleshooting issues, and enhancing disaster recovery strategies.
- Providing technical guidance and support to development and operations teams.
- Staying current with the latest Kafka features, industry best practices, and real-time technologies such as Spark.
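For context on the resiliency expectations above, here is a minimal sketch of a Java producer tuned for durability (acks=all, idempotence enabled). The broker addresses and topic name are placeholders, not details of the actual environment.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ResilientProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder brokers; in a Cloudera deployment these would be the on-premises broker addresses.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092,broker2:9092,broker3:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Durability settings commonly used for resilient pipelines:
        props.put(ProducerConfig.ACKS_CONFIG, "all");                // wait for all in-sync replicas
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);   // avoid duplicates on retry
        props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE); // ride out transient broker failures

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // "ingestion-events" is a hypothetical topic name.
            producer.send(new ProducerRecord<>("ingestion-events", "key-1", "payload"));
            producer.flush();
        }
    }
}
```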
Mandatory keywords:
- Experience with on-premises/Cloudera open-source Kafka
- Focus on disaster recovery aspects
- Knowledge of Kafka resiliency and new features such as KRaft
- Experience with real-time technologies such as Spark (see the Spark Structured Streaming sketch after this list)
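A minimal sketch of the Kafka-to-Spark integration implied above, using Spark Structured Streaming's Kafka source in Java. The broker address and topic are hypothetical, and the job assumes the spark-sql-kafka-0-10 package is on the classpath.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;

public class KafkaSparkBridge {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
                .appName("kafka-ingestion-sketch")
                .master("local[*]") // placeholder; a real deployment would target YARN or Kubernetes
                .getOrCreate();

        // Read the stream directly from Kafka; options here are illustrative.
        Dataset<Row> stream = spark.readStream()
                .format("kafka")
                .option("kafka.bootstrap.servers", "broker1:9092") // placeholder broker
                .option("subscribe", "ingestion-events")           // hypothetical topic
                .load();

        // Kafka delivers keys and values as binary; cast to strings for inspection.
        StreamingQuery query = stream
                .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
                .writeStream()
                .format("console")
                .start();

        query.awaitTermination();
    }
}
```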
Required Skills & Experience
- Extensive experience with Apache Kafka and real-time architecture including event-driven frameworks.
- Strong knowledge of Kafka Streams, Kafka Connect, Spark Streaming, Schema Registry, Flink, and Beam (a minimal Kafka Streams sketch follows this list).
- Experience with cloud messaging services such as GCP Pub/Sub.
- Excellent problem-solving skills.
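To illustrate the Kafka Streams skill set, a minimal Java topology that reads one topic, applies a stand-in transformation, and writes to another. The application id, broker address, and topic names are placeholders.

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class EnrichmentTopology {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "ingestion-enricher"); // hypothetical app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");    // placeholder broker
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> source = builder.stream("raw-events"); // hypothetical topics
        source.mapValues(v -> v.toUpperCase())                          // stand-in transformation
              .to("enriched-events");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```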
Knowledge & Experience / Qualifications
- Knowledge of Kafka data pipelines and messaging solutions to support critical business operations and enable real-time data processing.
- Monitoring Kafka performance to enhance decision-making and operational efficiency.
- Collaborating with development teams to integrate Kafka applications and services.
- Maintaining an architectural library for Kafka deployment models and patterns.
- Helping developers maintain Kafka connectors such as the JDBC, MongoDB, and S3 connectors, along with topic schemas, to streamline data ingestion from databases, NoSQL stores, and cloud storage, enabling faster data insights (see the connector registration sketch below).
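As a sketch of that connector work, the following registers a hypothetical Confluent JDBC source connector through the Kafka Connect REST API. The worker URL, database connection string, and connector settings are illustrative assumptions, not a prescribed configuration.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RegisterJdbcConnector {
    public static void main(String[] args) throws Exception {
        // Connector name, database URL, and settings below are placeholders.
        String config = """
            {
              "name": "orders-jdbc-source",
              "config": {
                "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
                "connection.url": "jdbc:postgresql://db-host:5432/orders",
                "mode": "incrementing",
                "incrementing.column.name": "id",
                "topic.prefix": "db-"
              }
            }
            """;

        // POST the config to a Connect worker (URL is a placeholder).
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://connect-worker:8083/connectors"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(config))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```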