
Data Engineer

DigiOutsource

Cape Town

On-site

ZAR 500,000 - 700,000

Full time

Today

Job summary

A leading digital gaming company in Cape Town is seeking a Data Engineer to develop and maintain data pipelines supporting real-time processing. The ideal candidate will have a Bachelor's in Computer Science and 3-4 years of experience. Key responsibilities include optimizing SQL queries and collaborating with various teams. The role offers a supportive environment with numerous career benefits including development programs and employee wellness initiatives.

Benefits

Free daily meals
On-site massages
Gym access
Learning and development programmes

Qualifications

  • 3-4 years of experience in data engineering or related roles.
  • Hands-on experience with data streaming platforms such as Kafka or RabbitMQ.
  • Familiarity with cloud platforms (e.g., AWS, Azure, Google Cloud).

Responsibilities

  • Developing and maintaining data pipelines to support real-time and batch processing.
  • Writing and optimizing SQL queries for data processing.
  • Collaborating with team members to integrate data from various sources.

Skills

SQL skills
Data pipeline development
Data streaming platforms (Kafka, RabbitMQ)
Attention to detail
Problem-solving skills

Education

Bachelor's degree in Computer Science or a related field

Tools

Python
Java
ETL tools

Job description

Overview

Kick-start your career in the online gaming world and experience the very latest in technology and innovation.

Be part of Super Group, the NYSE-listed digital gaming company behind leading Sports and iGaming brands. At DigiOutsource we bring passionate people and innovative tech together to create market-leading online gaming solutions, with multidisciplinary teams focused on products, customer experience and security.

We are looking for passionate, driven individuals to join us on our growth journey. You will find a supportive environment where your skills can flourish and your career can soar.

Ready to become a game-changer? Supercharge your career with us and be part of something extraordinary.

Why we need you

We are on a mission to create extraordinary experiences for our customers, and we believe that your unique skills, passion and drive will help us achieve our vision.

Role focus

As a Data Engineer within our Fintech ecosystem, you will focus on building and maintaining the data pipelines and processes that power our financial analytics, reporting and decision-making. You’ll work with data from diverse systems—from transactional platforms to customer engagement tools—ensuring it is reliable, accessible and optimized for high-impact use. This role supports our transition to modern data practices, including cloud-native architectures and real-time processing, while maintaining continuity and stability of on-premises operations. Your work will enable smarter, faster financial services and help shape the future of data in Payments.

What you’ll be doing
  • Developing and maintaining data pipelines to support real-time and batch processing.
  • Writing and optimizing SQL queries, stored procedures and scripts for data processing.
  • Supporting ETL/ELT workflows for data integration and transformation.
  • Collaborating with team members to integrate data from various sources into centralized systems.
  • Implementing and managing data streaming solutions using platforms like Kafka or RabbitMQ.
  • Ensuring data quality and reliability across all pipelines and processes.
  • Monitoring and troubleshooting data pipelines to ensure performance and reliability.
  • Documenting data workflows and providing support for data-related issues.
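To give a flavour of the day-to-day work, the batch-processing and SQL-optimization duties above can be sketched as a minimal extract-transform-load step. This is an illustrative example only, not company code: the record fields, table name and index are made up, and Python's built-in sqlite3 stands in for the production database.

```python
import sqlite3

# Hypothetical batch of raw transaction records, as a pipeline might
# receive them from an upstream source (all fields are illustrative).
raw_records = [
    {"txn_id": 1, "customer": "a", "amount_cents": 1250, "status": "OK"},
    {"txn_id": 2, "customer": "b", "amount_cents": 300,  "status": "ok"},
    {"txn_id": 3, "customer": "a", "amount_cents": 999,  "status": "FAILED"},
]

def transform(record):
    # Normalise status casing and convert cents to a decimal amount.
    return (
        record["txn_id"],
        record["customer"],
        record["amount_cents"] / 100.0,
        record["status"].upper(),
    )

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE transactions ("
    " txn_id INTEGER PRIMARY KEY, customer TEXT, amount REAL, status TEXT)"
)
# An index on the filter column keeps the aggregate query fast as volume grows.
conn.execute("CREATE INDEX idx_status ON transactions(status)")

# Load step: executemany batches the inserts in a single call.
conn.executemany(
    "INSERT INTO transactions VALUES (?, ?, ?, ?)",
    [transform(r) for r in raw_records],
)

total_ok = conn.execute(
    "SELECT SUM(amount) FROM transactions WHERE status = 'OK'"
).fetchone()[0]
print(total_ok)  # 15.5
```

In a real pipeline the same shape recurs at scale: batched loads, normalising transforms, and indexes or query rewrites to keep reporting queries fast.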
Essential skills
  • Bachelor's degree in Computer Science, Data Engineering, Information Systems or a related field.
  • 3-4 years of experience in data engineering or related roles.
  • Strong SQL skills including querying and optimizing database operations.
  • Experience developing data pipelines for real-time and batch processing.
  • Hands-on experience with data streaming platforms such as Kafka or RabbitMQ.
  • Familiarity with ETL processes and tools.
  • Proficiency in a programming language such as Python or Java for data tasks.
  • Knowledge of data modelling basics for relational databases.
  • Attention to detail and a commitment to ensuring data accuracy and reliability.
  • Problem-solving skills and the ability to troubleshoot issues in data systems.
Desirable skills
  • Relevant certifications (e.g. AWS Certified Data Analytics - Specialty, Microsoft Certified: Azure Data Engineer or Databricks Certified Data Engineer).
  • Familiarity with cloud platforms (e.g. AWS, Azure, Google Cloud) and cloud-based data services.
  • Exposure to big data technologies such as Apache Spark, Hadoop or Airflow.
  • Knowledge of data governance and compliance standards.
  • Experience with data formats like JSON or Parquet.
  • Writing clean, modular code for data processing automation and integration with APIs or cloud services.
  • Basic understanding of containerization tools such as Docker.
  • Interest in learning and adopting emerging data technologies.
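The "data formats like JSON" and "clean, modular code" bullets can be illustrated with a small newline-delimited JSON (JSON Lines) round trip, a format commonly used for streaming and log-style data. The event records below are invented for the example; only the Python standard library is used.

```python
import io
import json

# Illustrative event records; field names are made up for the example.
events = [
    {"event": "deposit", "amount": 100},
    {"event": "withdrawal", "amount": 40},
]

def write_jsonl(records, fh):
    """Serialise records as newline-delimited JSON, one object per line."""
    for rec in records:
        fh.write(json.dumps(rec) + "\n")

def read_jsonl(fh):
    """Parse JSON Lines back into a list of dicts, skipping blank lines."""
    return [json.loads(line) for line in fh if line.strip()]

buf = io.StringIO()
write_jsonl(events, buf)
buf.seek(0)
round_tripped = read_jsonl(buf)
assert round_tripped == events
```

Small, single-purpose helpers like these are what "modular code for data processing" looks like in practice: each function does one step and can be tested in isolation.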
Our values

Our values are non-negotiable. Our culture is underpinned by core values and behavioural competencies essential for all employees.

  • Adaptability
  • Ownership and accountability
  • Initiating action
  • Resilience
  • Team orientation
  • Integrity
  • Innovation
What you’ll get back

We offer a great variety of personal and professional benefits to help you thrive at DigiOutsource and Super Group. This includes:

  • Learning and development programmes to expand your skills and advance your career.
  • Performance feedback to help you improve and reach your potential.
  • Employee Assistance Programme and family benefits.
  • Free daily meals, on-site massages, gym access and other benefits.
  • Group life cover, financial services assistance, and wellness benefits.
  • Leadership training, referral bonuses and retirement benefits.
  • Team socials and supportive, inclusive work environment.
Be part of that Superclass feeling

At Super Group diversity is part of our DNA. With teams across 16 countries, 85 nationalities and 27 languages, we champion equal opportunities and an inclusive environment. Growth is supported and contributions are valued.

Game on!

Note: Recruitment decisions will take our Talent Management and Talent Development Programme into account. Shortlisted candidates may need to complete an assessment.

This position requires trust and honesty, as it involves access to customers' financial details, so credit and criminal record checks will be conducted. The qualifications identified herein are an inherent requirement of the job, so a qualification verification check will also be done. By applying for this role and supplying the necessary details, you grant us permission to carry out these checks confidentially and solely for verification purposes.

If you do not hear from us within two weeks, please consider your application unsuccessful.

The perfect place to work, play and grow.

Key skills

Apache Hive, S3, Hadoop, Redshift, Spark, AWS, Apache Pig, NoSQL, Big Data, Data Warehouse, Kafka, Scala

Employment Type: Full-Time


Vacancy: 1
