
Data Analyst – BI

Prime South Africa

Gauteng

On-site

ZAR 300 000 - 400 000

Full-time

Yesterday

Job summary

A leading data-driven company in Gauteng is seeking a Junior Data and Analytics Engineer. The role involves building data pipelines, creating Power BI dashboards, and developing machine learning models. Ideal candidates will have a Bachelor's degree and proficiency in Python and SQL. This position offers an opportunity to collaborate with a dynamic team and influence data-driven decision-making.

Qualifications

  • Bachelor's degree in Computer Science, Engineering, Statistics, Mathematics, or equivalent.
  • Proficient with pandas, NumPy, Jupyter / VS Code.
  • Strong skills in SQL query writing and optimization.

Responsibilities

  • Build and maintain data pipelines in Python.
  • Design Power BI dashboards for actionable insights.
  • Develop and validate machine learning models.

Skills

Data analysis
Python proficiency
SQL query optimization
Power BI dashboard creation
Machine learning familiarity
Flask API development

Education

Bachelor's degree in Computer Science or related field

Tools

Databricks
Jupyter
Git

Job description

The Role

We're on the lookout for a dynamic and forward-thinking Junior Data and Analytics Engineer to join our team.

In this role, you'll support our Data Science and Analytics team by building efficient data pipelines, creating lightweight web applications, and delivering insight-driven reports and predictive models. Your work will directly influence decision-making across the business.

Reporting to the Head of Data Science, you'll collaborate with Data Scientists, BI Analysts, Software Engineers, and key Business Stakeholders to make a real impact.

Key Responsibilities
  • Data Engineering: Build and maintain reproducible data pipelines in Python (pandas) to clean, transform, and load data from diverse sources into SQL or Databricks Delta tables. Optimize SQL queries, views, and stored procedures for clarity and performance.
  • Analytics & BI: Design intuitive Power BI dashboards that highlight actionable KPIs for teams across operations, finance, and product. Partner with business teams to translate ad-hoc questions into scalable analytical solutions.
  • Machine Learning: Develop and validate scikit-learn models for regression and classification use cases. Package models as Flask APIs for seamless integration with internal tools and external services.
  • Web & API Development: Build lightweight Flask microservices or data apps (e.g., self-serve prediction endpoints, data quality monitors).
  • Follow best practices for CI / CD and version control (Git) to deploy to staging and production environments.
  • Documentation & Testing: Write clear, concise documentation and unit tests to ensure reproducibility, maintainability, and knowledge-sharing.
  • Collaboration: Work closely with senior data scientists, analysts, and domain experts. Proactively seek feedback and iterate quickly.
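The first responsibility above — reproducible pandas pipelines that clean and transform data before loading — can be sketched roughly as follows. The schema (`order_date`, `region`, `amount`) and cleaning rules are illustrative assumptions, not details from the posting:

```python
import pandas as pd

def clean_sales(raw: pd.DataFrame) -> pd.DataFrame:
    """Clean and transform a raw extract before loading it downstream.

    The column names and rules here are hypothetical -- the posting
    does not specify a schema.
    """
    df = raw.copy()
    # Parse dates; rows the parser cannot handle become NaT and are dropped.
    df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")
    df = df.dropna(subset=["order_date", "amount"])
    # Normalise categorical text and enforce a numeric type.
    df["region"] = df["region"].str.strip().str.title()
    df["amount"] = df["amount"].astype(float)
    return df

raw = pd.DataFrame({
    "order_date": ["2024-01-05", "not a date", "2024-02-10"],
    "region": [" gauteng ", "gauteng", "WESTERN CAPE"],
    "amount": [120.0, 80.0, 95.5],
})
clean = clean_sales(raw)
# The unparseable-date row is dropped; region labels are normalised.
```

The load step would then hand `clean` to the target store — e.g. `clean.to_sql(...)` for a SQL table, or converting to a Spark DataFrame for a Databricks Delta table — depending on the environment.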
Personal Attributes and Skills
  • Analytical mindset: You thrive on solving challenging problems with data.
  • Communication: You can explain technical findings to non-technical stakeholders with ease.
  • Curiosity & initiative: You proactively identify opportunities to automate processes or uncover new insights.
  • Team player: You're comfortable working in cross-functional squads and value giving and receiving feedback.
  • Attention to detail: You write clean, well-commented code and reliable tests.
Qualifications & Experience
  • Bachelor's degree in Computer Science, Engineering, Statistics, Mathematics, Biology, or a related field (or equivalent experience).
  • Python: Proficient with pandas, NumPy, and Jupyter / VS Code.
  • SQL: Strong skills in query-writing, optimisation, and data modelling (CTEs, indexing, window functions).
  • Databricks: Hands-on experience with notebooks, job management, and Delta Lake.
  • Power BI: Skilled in building data models, DAX measures, and interactive dashboards.
  • Flask: Experience creating RESTful endpoints, handling request / response cycles, and deploying small web apps.
  • Machine Learning: Familiarity with scikit-learn workflows (train / validation splits, cross-validation, hyper-parameter tuning, model evaluation metrics).
  • Version Control: Proficient with Git / GitHub or GitLab.
  • Bonus Skills: Experience with cloud platforms (Azure, AWS, or GCP) and containerisation (Docker).
  • Knowledge of CI / CD pipelines (GitHub Actions, Azure DevOps).
  • Exposure to data-warehouse / lakehouse technologies: Databricks (advanced), Snowflake, Delta Lake.
  • Basic understanding of MLOps concepts (model monitoring, retraining triggers).
  • Familiarity with Tableau or other BI tools.
  • Spark / PySpark exposure for larger datasets.
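The scikit-learn workflow the requirements name — train/validation splits, cross-validation, hyper-parameter tuning, and evaluation — fits together as in this minimal sketch. The synthetic dataset, model choice, and parameter grid are placeholders for whatever a real use case would call for:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic data stands in for a real business dataset.
X, y = make_classification(n_samples=400, n_features=8, random_state=0)

# Hold out a test set, then tune the regularisation strength C with
# 5-fold cross-validation on the training portion only.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)
search = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},
    cv=5,
)
search.fit(X_train, y_train)

# Evaluate the tuned model once, on the untouched test set.
test_accuracy = search.score(X_test, y_test)
```

Keeping the test set out of the cross-validation loop is the point of the split: the final accuracy estimate is not inflated by the tuning process. A model like `search.best_estimator_` is what would then be packaged behind a Flask endpoint, as the responsibilities describe.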