Kafka Developer

Appit Software Solutions

United States

Remote

USD 150,000 - 200,000

Full time

Today

Job summary

A leading software solutions provider in the United States is looking for an experienced Kafka Developer. The role involves developing and maintaining Kafka applications, working with Hadoop, and utilizing cloud technologies. Candidates must have 2-5 years of hands-on experience with Big Data solutions and a Bachelor's degree in a relevant field. This position offers a unique opportunity to contribute to advanced analytics and insights solutions.

Qualifications

  • Bachelor's degree in Computer Science, Data Engineering, Information Systems, or a related field.
  • 2-5 years of hands-on experience developing and supporting Big Data solutions with Apache Kafka and Hadoop ecosystems.

Responsibilities

  • Develop and maintain Kafka producers, consumers, and stream-processing applications.
  • Work with HDFS to ingest and store large volumes of data.
  • Implement real-time transforms and stateful operations with Kafka Streams.
  • Integrate Kafka with other systems and orchestrate workflows.
  • Tune Kafka brokers and cluster settings for performance.
  • Deploy and manage Kafka clusters on cloud platforms.
  • Apply security measures within Kafka and secure HDFS permissions.
  • Collaborate with stakeholders to translate requirements into solutions.
  • Stay current on emerging Big Data and streaming technologies.

Job description

Kafka Developer Experienced Associate (3-6 Years)

Our Analytics & Insights Managed Services team brings a unique combination of industry expertise, technology, data management, and managed services experience to create sustained outcomes for our clients and improve business performance. We empower companies to transform their approach to analytics and insights while helping you build your skills in exciting new directions. You will have a voice at our table, helping design, build, and operate the next generation of Big Data streaming and batch services leveraging Kafka, Hadoop, and HDFS.

Job Requirements and Preferences

Responsibilities
  • Kafka Streaming & Messaging – Develop and maintain Kafka producers, consumers, and stream-processing applications using Java, Scala, or Python; design topic layouts, partitioning schemes, and data retention policies to support high-throughput, low-latency use cases.
  • Hadoop Ecosystem & HDFS – Work with HDFS to ingest and store large volumes of structured and unstructured data; build MapReduce or Spark jobs to process historical datasets in batch.
  • Stream Processing Frameworks – Implement real-time transforms and stateful operations with Kafka Streams, Apache Flink, or Spark Structured Streaming; handle exactly-once semantics, windowing, and watermarks in streaming pipelines.
  • Data Integration & Orchestration – Integrate Kafka with other systems (JDBC sources, REST APIs, NoSQL stores) and Hadoop components via Sqoop, NiFi, or custom connectors; orchestrate complex workflows using Apache Airflow, Oozie, or NiFi.
  • Performance Tuning & Reliability – Tune Kafka brokers, consumer groups, and Hadoop cluster settings for scalability and resilience; implement monitoring and alerting (Prometheus, Grafana, Confluent Control Center) to maintain SLAs.
  • Cloud & Hybrid Deployments – Deploy and manage Kafka clusters and Hadoop services on AWS, Azure, or GCP (MSK, HDInsight, Dataproc); use Infrastructure-as-Code (Terraform, CloudFormation) and containerization (Docker, Kubernetes) for repeatable environments.
  • Security & Governance – Apply encryption (TLS), authentication (SASL), and ACLs within Kafka and secure HDFS permissions; collaborate on data cataloging, lineage, and compliance standards.
  • Collaboration & Communication – Partner with data scientists, BI teams, and stakeholders to translate requirements into scalable streaming and batch solutions; document architecture diagrams, runbooks, and best-practice guides.
  • Continuous Learning & Innovation – Stay current on emerging Big Data and streaming technologies (Kafka Connect, ksqlDB, Pulsar); share knowledge through code reviews, brown-bag sessions, and contributions to internal accelerators.

Qualifications
  • Basic Qualifications
    – Minimum Degree Required: Bachelor's degree in Computer Science, Data Engineering, Information Systems, or a related technical field.
    – Minimum Years of Experience: 2-5 years of hands-on experience developing and supporting Big Data solutions with Apache Kafka and Hadoop ecosystems, including Kafka (core brokers and the Streams API), connectors via Informatica, Qlik Replicate, and ADF, Python and PySpark, SQL, and data modeling.
  • Preferred Qualifications
    – Degree Preferred: Master's degree in Data Science, Analytics, Computer Science, or a related discipline.
    – Preferred Fields of Study: Data Processing/Analytics, Management Information Systems, Software Engineering.