Senior Specialist: Data Engineer

Vodafone Group

Cape Town

On-site

ZAR 600 000 - 900 000

Full time

Job summary

A leading telecommunications company in Cape Town is looking for a Senior Data Engineer to design and maintain scalable data infrastructure. The ideal candidate should have expertise in data processing technologies like Spark and Kafka, along with strong programming skills in Python or Java. This role involves collaboration across teams, ensuring data integrity, and driving innovation in a cloud environment. Competitive benefits and a commitment to diversity and inclusion are part of the company's culture.

Benefits

Enticing incentive programs
Retirement funds and medical aid benefits
Exclusive staff discounts with partner companies

Qualifications

  • 3+ years of experience in software engineering, data engineering, or cloud engineering roles.
  • Experience with Agile methodologies (Kanban, Scrum).
  • Strong analytical and problem-solving abilities with a commitment to continuous learning.

Responsibilities

  • Design, develop, and maintain scalable data pipelines using both batch and real-time technologies.
  • Implement and manage edge applications ensuring data processing efficiency.
  • Lead incident management responses to ensure minimal impact on operations.

Skills

Programming languages (Python, Java, Scala)
Big Data Technologies (Spark, Hive)
Database Management (MongoDB, Cassandra)
Cloud Computing (AWS, GCP)
Containerization (Docker, Kubernetes)
API Management (Nginx)

Education

3-year IT or IS degree or diploma
Relevant cloud certification (AWS, GCP, Azure)

Tools

OpenShift
Kafka
Flink

Job description

When it comes to putting people first, we're number 1.

The number 1 Top Employer in South Africa.
Certified by the Top Employer Institute 2025.

Role Purpose/Business Unit:

The primary purpose of this Senior Data Engineer role is to design, develop, and maintain robust, scalable, and secure data and analytics infrastructure that supports batch and real-time data processing at scale. This includes managing on-premises big data platforms, edge applications, and cloud deployments to ensure seamless integration and optimal performance across all environments. The role is crucial in driving innovation, ensuring data integrity, and delivering actionable insights that empower the organization to make data-driven decisions.

Key Objectives
  • Data Pipeline Excellence: Build and maintain efficient data pipelines that handle large volumes of data with high reliability and performance.
  • Edge and Cloud Integration: Seamlessly integrate edge applications and cloud services to provide real‑time data access and machine learning capabilities.
  • Innovation and Improvement: Continuously seek opportunities to enhance the data infrastructure, adopt new technologies, and improve processes.
  • Collaboration and Leadership: Work closely with cross‑functional teams to understand their data needs and provide technical leadership and guidance.
Your responsibilities will include:
  • Data Pipeline Management: Design, develop, and maintain scalable data pipelines using batch technologies (like Spark, NiFi, and Hive) and real‑time technologies (like Kafka, Flink, and Spark Streaming).
  • Edge Application Development: Implement and manage edge applications using MongoDB and Cassandra, ensuring efficient data processing and storage.
  • Microservices and Containerization: Develop and deploy microservices in an OpenShift containerized environment, utilizing tools like Nginx API Gateway for real‑time data access.
  • Cloud Deployment and Management: Implement and support similar use cases in AWS, ensuring seamless integration between on‑premise and cloud environments.
  • Performance Monitoring and Optimization: Continuously monitor and optimize the performance of data pipelines, applications, and services.
  • Security and Compliance: Ensure all systems and data processes comply with relevant security standards and regulations.
  • Technology Stack Selection: Recommend/make decisions on the appropriate technologies and tools to use for various components of the data and analytics infrastructure.
  • Architecture Design: Define the architecture for data pipelines, edge applications, and microservices to ensure scalability and reliability.
  • Resource Allocation: Allocate resources effectively to balance performance, cost, and scalability across on‑premise and cloud environments.
  • Data Governance and Compliance: Establish and enforce data governance policies to ensure data quality, security, and compliance.
  • Incident Management: Lead the response to any incidents or outages, ensuring quick resolution and minimal impact on operations.
  • Innovation and Improvement: Continuously seek opportunities to improve processes, adopt new technologies, and drive innovation within the team.
The ideal candidate for this role will have:
  • A 3-year degree or diploma in IT, IS, or a related field (essential)
  • Relevant AWS, GCP or Azure cloud certification at professional or associate level
  • Data engineering or related software development experience
  • Agile exposure working with Kanban or Scrum
Key Competencies
  • Technical Proficiency: Strong skills in programming languages such as Python, Java, or Scala
  • Big Data Technologies: Expertise in tools and formats such as Spark, Hive, Parquet, and Iceberg
  • Database Management: Proficiency with both relational and NoSQL databases (e.g., MongoDB, Cassandra)
  • Cloud Computing: Experience with cloud platforms like AWS/GCP, including services for data storage and processing.
  • Containerization & Microservices: Knowledge of containerization technologies (e.g., Docker, Kubernetes) and microservices architecture, particularly in OpenShift, AWS ECS and GCP GKE environments
  • API Management: Experience with API gateways like Nginx and developing APIs for real‑time data access
Knowledge Areas
  • Distributed Systems: In-depth understanding of distributed computing principles and technologies
  • Data Engineering: Knowledge of data pipeline design, ETL processes, and data integration
  • Security and Compliance: Familiarity with data security practices and regulatory compliance requirements
  • Performance Optimization: Techniques for monitoring and optimizing the performance of data systems and applications
  • Edge Computing: Understanding of edge computing concepts and technologies for processing data closer to the source
Experience
  • Hands‑On Experience: Several years of experience in software engineering, data engineering, or cloud engineering roles
  • Project Management: Experience managing complex projects, preferably in a big data or cloud environment.
  • Team Collaboration: Proven ability to work effectively in cross‑functional teams and communicate technical concepts to non‑technical stakeholders
  • Problem‑Solving: Strong analytical and problem‑solving skills, with a track record of addressing complex technical challenges
  • Continuous Learning: Commitment to staying updated with the latest technologies and best practices in the field
We make an impact by offering:
  • Enticing incentive programs and competitive benefit packages
  • Retirement funds, risk benefits, and medical aid benefits
  • Cell phone and data benefits, advantageous fibre connection discounts, and exclusive staff discounts offered in collaboration with partner companies

Closing date for Applications: 09 January 2026.

The base location for this role is Century City.

The company's approved Employment Equity Plan and Targets will be considered as part of the recruitment process. As an Equal Opportunities employer, we actively encourage and welcome people with various disabilities to apply.
Vodacom is committed to an organisational culture that recognises, appreciates, and values diversity & inclusion.
