Key Responsibilities
- Design, implement, and maintain real‑time data replication and streaming solutions across multi‑cloud and on‑prem environments.
- Act as a subject‑matter expert for IBM IIDR, ensuring high availability, performance tuning, and conflict‑free data synchronization.
- Develop and support Kafka‑based event streaming pipelines integrated with Debezium for CDC.
- Manage Redis clusters (on‑prem and Azure) for caching, messaging, and data persistence use cases.
- Configure, optimize, and monitor Apache Solr for high‑volume indexing and search performance.
- Collaborate with platform and application teams to ensure middleware alignment with architectural and security standards.
- Implement observability and monitoring solutions for proactive detection and troubleshooting.
- Contribute to automation of deployments, configuration management, and system upgrades.
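To illustrate the caching responsibility above, here is a minimal cache‑aside sketch. A plain Python dict stands in for a Redis client so the example is self‑contained; in practice the cache calls would be Redis GET and SET (with a TTL), and the key scheme and loader function are illustrative assumptions, not part of this role's actual stack.

```python
# Minimal cache-aside pattern. A plain dict stands in for a Redis client
# (in production: GET/SET with a TTL via a Redis library); the key scheme
# and loader function below are illustrative assumptions.

def cached_lookup(key, cache, load_from_db):
    """Return the cached value for key, loading and caching it on a miss."""
    value = cache.get(key)
    if value is None:
        value = load_from_db(key)   # fall through to the source of record
        cache[key] = value          # in Redis: SET key value EX <ttl>
    return value

cache = {}
calls = []

def load(k):
    calls.append(k)                 # track how often the backend is hit
    return f"row-for-{k}"

first = cached_lookup("user:42", cache, load)
second = cached_lookup("user:42", cache, load)   # served from the cache
```

The second lookup never touches the backend, which is the point of the pattern: repeated reads are absorbed by the cache layer.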
Core Technical Skills
- Strong expertise with IBM InfoSphere Data Replication (IIDR) across heterogeneous databases.
- Experience configuring CDC, tuning latency, and managing large‑scale replication topologies.
- Skilled in troubleshooting and monitoring using IIDR logs, MQ integration, and event diagnostics.
- Knowledge of HA/DR configurations and conflict resolution in multi‑node environments.
- Advanced experience with Apache Kafka clusters, brokers, topics, partitions, and schema registry.
- Experience designing and implementing high‑throughput streaming pipelines with partitioning strategies and delivery guarantees.
- Experience integrating Debezium connectors, Kafka Connect, and Kafka Streams.
- Strong knowledge of Kafka security, replication, and performance tuning.
- Deep knowledge of Redis architecture, persistence mechanisms (RDB, AOF), clustering, and sharding.
- Experience managing Redis Enterprise and Azure Cache for Redis.
- Performance optimization, memory management, failover configuration, and integration as a cache, message broker, or session store.
- Administration and tuning of SolrCloud clusters including sharding, replication, and indexing performance.
- Schema design, query optimization, and integration with ingestion pipelines and search‑driven analytics.
- Hands‑on experience with Debezium connectors on Azure, including integration with Kafka Connect and stream processing, plus troubleshooting of event streaming, schema evolution, and offset management.
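As a concrete sketch of the Debezium and Kafka Connect skills listed above, the snippet below builds a registration payload for a Debezium PostgreSQL source connector. Property names follow Debezium's documented PostgreSQL connector configuration (Debezium 2.x uses "topic.prefix"); the connector name, hostnames, credentials, and table names are placeholder assumptions.

```python
import json

# Sketch of a Debezium PostgreSQL source connector registration payload
# for Kafka Connect's REST API. Hostnames, credentials, and table names
# are placeholders; property keys follow Debezium's documented config.
connector = {
    "name": "orders-cdc",                      # hypothetical connector name
    "config": {
        "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
        "tasks.max": "1",
        "database.hostname": "db.example.internal",   # placeholder host
        "database.port": "5432",
        "database.user": "cdc_user",
        "database.password": "********",
        "database.dbname": "orders",
        "topic.prefix": "orders",              # prefixes all change topics
        "table.include.list": "public.orders", # capture only this table
    },
}

# Kafka Connect accepts this as a POST to its /connectors endpoint;
# here we only serialize it to show the shape of the request body.
payload = json.dumps(connector, indent=2)
```

Change events for the captured table would then land on a topic named after the prefix and table (here, `orders.public.orders`), ready for downstream stream processing.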
Complementary Skills
- Strong understanding of data integration architectures, event‑driven systems, and real‑time analytics.
- Familiarity with Azure ecosystem (AKS, Event Hubs, Data Factory, Synapse).
- Knowledge of containerization (Docker/Kubernetes) and CI/CD automation.
- Experience in system observability (Prometheus, Grafana, ELK, OpenTelemetry).
Soft Skills
- Strong problem‑solving and analytical mindset.
- Excellent communication and collaboration within multi‑vendor or hybrid teams.
- Ability to document complex architectures and mentor less experienced engineers.
- Proactive approach to performance tuning, automation, and incident prevention.
Desired Profile
- 7+ years of experience in middleware or integration engineering roles.
- Proven track record of supporting critical systems and delivering large‑scale data streaming projects.
- Experience working within enterprise environments (finance, telecom, manufacturing, or equivalent).
- English proficiency and ability to operate in distributed global teams.
Diversity & Inclusion
Kyndryl welcomes people of all cultures, backgrounds, and experiences and encourages inclusion in its workplace.
What You Can Expect
State‑of‑the‑art resources, career development, benefits, and learning programs support your growth and wellbeing.
How to Apply
Submit your CV/resume and a recruiter will reach out with matching opportunities. You can also sign up for job alerts.