
Senior Data Scientist

BETSoftware

Cape Town

On-site

ZAR 700,000 - 900,000

Full time

Today

Job summary

A leading software company in Cape Town is seeking a Senior Data Scientist. This role involves designing efficient data pipelines, applying machine learning techniques, and working with big data technologies. The ideal candidate should have at least 5 years of technical experience and proficiency in programming languages like Python. Join a dynamic team focused on innovation and data-driven solutions.

Qualifications

  • At least 5 years in a technical role with experience in data warehousing.
  • 3-5 years of experience across the data science workflow.
  • Proficiency in programming languages such as Python, Java, or Scala.

Responsibilities

  • Design and manage high-throughput, low-latency data pipelines.
  • Apply statistical and machine learning techniques to analyse data.
  • Build and implement advanced statistical and machine learning models.

Skills

Data Science
Business Intelligence Engineer
Development
Data Engineering

Requirements

5+ years in a technical role
Experience with big data technologies
Proficiency in programming languages like Python

Tools

Hadoop
Spark
PostgreSQL

Job description

Overview

Job Title: Senior Data Scientist

Job Location: Western Cape, Cape Town

Deadline: October 02, 2025

Skill Set

  • Data Science
  • Business Intelligence Engineer
  • Development
  • Data Engineering

Responsibilities

Job Responsibilities:

Data Engineering

  • Design and manage high-throughput, low-latency data pipelines using distributed computing frameworks.
  • Build scalable ETL/ELT workflows using tools like Airflow and Spark.
  • Work with containerised environments (e.g., Kubernetes, OpenShift) and real-time data platforms (e.g., Apache Kafka, Flink).
  • Ensure efficient data ingestion, transformation, and integration from multiple sources.
  • Maintain data integrity, reliability, and governance across systems.

Data Analysis and Modelling:

  • Apply statistical and machine learning techniques to analyse complex data sets, identifying patterns, trends, and actionable insights that drive business strategy and operational efficiency.
  • Develop predictive models, recommendation systems, and optimisation algorithms to solve business challenges and enhance operational efficiency.
  • Transform raw data into meaningful features that improve model performance, and translate business challenges into analytical problems that yield data-driven solutions.

Machine Learning and AI Development:

  • Build and implement advanced statistical and machine learning models to solve complex problems.
  • Identify data quality issues and work with data engineers to solve them.
  • Stay up to date with the latest advancements in AI/ML and implement best practices.
  • Develop, implement, and maintain scalable machine learning models for various applications.

Design and Planning of Data Engineering Solutions:

  • Design and implement testing frameworks to measure the impact of business interventions.
  • Design and implement scalable, high-performance big data applications that support analytical and operational workloads.
  • Lead evaluations and recommend best-fit technologies for real-time and batch data processing.
  • Ensure that data solutions are optimised for performance, security, and scalability.
  • Develop and maintain data models, schemas, and architecture blueprints for relational and big data environments.
  • Ensure seamless data integration from multiple sources, leveraging Kafka for real-time streaming and event-driven architecture.
  • Facilitate system design and review, ensuring compatibility with existing and future systems.
  • Optimise data workflows, ETL/ELT pipelines, and distributed storage strategies.

Technical Development and Innovation:

  • Keep abreast of technological advancements in data science, data engineering, machine learning and AI.
  • Continuously evaluate and experiment with new tools, libraries, and platforms to ensure that the team is using the most effective technologies.
  • Lead end-to-end data science and data engineering projects that support strategic goals. This includes requirements gathering, technical deliverable planning, output quality and stakeholder management.
  • Continuously research, develop, and implement innovative ideas and improved methods, systems, and work processes that lead to higher quality and better results.
  • Build and maintain Kafka-based streaming applications for real-time data ingestion, processing, and analytics.
  • Design and implement data processing and ingestion applications for data lakes and data warehouses.
  • Utilise advanced SQL/Spark query optimisation techniques, indexing strategies, partitioning, and materialised views to enhance performance.
  • Work extensively with relational databases (PostgreSQL, MySQL, SQL Server) and big data technologies (Hadoop, Spark).
  • Design and implement data architectures that efficiently handle structured and unstructured data at scale.

Resourceful and Improving:

  • Find innovative ways, within established processes, to overcome challenges, leveraging available tools, data, and methodologies effectively.
  • Continuously seek out new techniques, best practices and emerging trends in Data Science, AI, and machine learning.
  • Actively contribute to team learning by sharing insights, tools and approaches that improve overall performance.

Qualifications

Job Specification:

  • At least 5 years in a technical role with experience in data warehousing and data engineering.
  • 3-5 years’ experience across the data science workflow will be advantageous.
  • 3-5 years of proven experience as a data scientist, with expertise in machine learning, statistical analysis, and data visualisation, will be advantageous.
  • Proficiency in programming languages such as Python, Java, or Scala for data processing.
  • Experience with big data technologies such as Hadoop, Spark, Hive, and Airflow, as well as relational databases such as PostgreSQL, MySQL, and SQL Server.
  • Expertise in SQL/Spark performance tuning, database optimisation, and complex query development.
  • Experience with .NET programming (C#, C++, Java) and design patterns is advantageous.

Living the Spirit

  • Adaptability & Resilience: Embrace change with flexibility, positivity, and a proactive mindset. Thrive in dynamic, fast-paced environments by adjusting to evolving priorities and technologies.
  • Decision-Making & Accountability: Make timely, data-informed decisions involving the team to ensure transparency and alignment. Confidently justify choices based on thorough analysis and sound judgment.
  • Innovation & Continuous Learning: Actively pursue new tools, techniques, and best practices in Data Science, AI, and engineering. Share insights openly to foster team growth and continuously improve performance.
  • Collaboration & Inclusion: Foster open communication and create a supportive, inclusive environment where diverse perspectives are valued. Empower team members to share ideas, seek help, and give constructive feedback freely.
  • Leadership & Growth: Lead authentically with integrity and openness. Support team members through mentorship, skill development, and creating a safe space for honest feedback and innovation. Celebrate successes and embrace challenges as growth opportunities.

Apply Before October 02, 2025
