
An established data analytics firm in South Africa seeks a BI & Reporting Analyst to transform data into actionable insights. Candidates should possess strong SQL skills, Power BI expertise, and be proficient in Python. Responsibilities include designing dashboards, developing data models, and automating reporting processes. The ideal candidate will collaborate with cross-functional teams to enhance decision-making through sophisticated data visualisation and clear documentation.
The BI & Reporting Analyst will be part of the Data Science & Analytics team, supporting departments across the business by transforming raw data into meaningful insights.
You will play a hands‑on role in building SQL data models, designing user‑friendly dashboards in Power BI, and automating data workflows to ensure timely, accurate, and actionable reporting.
This position is based at our Bryanston office and reports to the Head of Data Science & Analytics.
A Bachelor's Degree in Mathematics, Statistics, Computer Science, Engineering, Biology, or equivalent experience
Strong SQL skills — including data modelling, query optimisation, and advanced joins / window functions
Power BI expertise — from semantic model design to interactive dashboard creation (Desktop & Service)
Python proficiency — especially using pandas for data manipulation and SQL integration
Comfortable with Git (GitHub, GitLab, or Azure Repos) and basic command‑line tools
Experience working across Windows and Linux environments
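To illustrate the kind of SQL and Python skills listed above, here is a minimal sketch combining the two: a window-function query (rank and running total) run against a hypothetical in-memory sales table via sqlite3 and pandas. The table, columns, and figures are invented for illustration and are not from the posting.

```python
import sqlite3

import pandas as pd

# Hypothetical fixture data standing in for a real warehouse table.
conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE sales (region TEXT, month TEXT, amount REAL);
    INSERT INTO sales VALUES
        ('Gauteng', '2024-01', 100.0),
        ('Gauteng', '2024-02', 150.0),
        ('Western Cape', '2024-01', 200.0),
        ('Western Cape', '2024-02', 180.0);
    """
)

# Rank each month within its region and compute a running total --
# the sort of window-function query the requirements describe.
query = """
    SELECT region,
           month,
           amount,
           RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rank_in_region,
           SUM(amount) OVER (PARTITION BY region ORDER BY month)  AS running_total
    FROM sales
    ORDER BY region, month;
"""
df = pd.read_sql_query(query, conn)
print(df)
conn.close()
```

Loading the result into a pandas DataFrame, as here, is one common way of integrating SQL output into Python reporting workflows.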
Design clear and intuitive Power BI dashboards for teams across sales, operations, and executive leadership.
Develop reusable data models and DAX measures to enable self‑service insights and scalable reporting.
Translate business questions into data‑driven visual stories that support better decision‑making.
Write clean, efficient SQL queries to extract and prepare data from multiple sources.
Optimise stored procedures, views, and data transformations for clarity and performance.
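As a small sketch of performance work of this kind, the snippet below uses SQLite's EXPLAIN QUERY PLAN to show a query switching from a full table scan to an index lookup once an index exists. The table and index names are hypothetical; production tuning would target the actual warehouse engine.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer_id INTEGER, amount REAL)")

lookup = "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 7"

# Without an index the planner must scan the whole table.
plan_before = conn.execute(lookup).fetchall()

# After indexing the filter column, the planner uses the index instead.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
plan_after = conn.execute(lookup).fetchall()

print(plan_before[0][-1])  # e.g. a SCAN of the table
print(plan_after[0][-1])   # e.g. a SEARCH using the new index
conn.close()
```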
Build and maintain lightweight, reproducible data pipelines using Python and / or Databricks.
Develop scripts to automate ETL, reporting refreshes, or data quality checks using Python (pandas, SQLAlchemy, or PySpark).
Use Git for version control and deploy code using basic CI / CD pipelines when needed.
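A hedged sketch of the automated data-quality checks mentioned above, using pandas. The column names, sample data, and check rules are hypothetical; a real script would run against live extracts and feed alerts or a report refresh.

```python
import pandas as pd

def run_quality_checks(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable data-quality issues found in df."""
    issues = []
    if df["order_id"].duplicated().any():
        issues.append("duplicate order_id values found")
    if df["amount"].isna().any():
        issues.append("missing values in amount")
    if (df["amount"] < 0).any():
        issues.append("negative amounts found")
    return issues

# Illustrative fixture containing one of each problem.
orders = pd.DataFrame(
    {
        "order_id": [1, 2, 2, 3],
        "amount": [120.0, None, 80.0, -5.0],
    }
)
problems = run_quality_checks(orders)
for p in problems:
    print("DATA QUALITY:", p)
```

A script like this can be version-controlled in Git and scheduled from a CI/CD pipeline, in line with the workflow described above.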
Document dashboards, datasets, and SQL logic for transparency and maintainability.
Write tests to validate critical SQL queries or Python code and reduce the risk of data errors.
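The testing responsibility above can be sketched as follows: a critical query is wrapped in a function and validated against known fixture data. The schema and query are hypothetical stand-ins, and the checks are plain assertions so the example stays dependency-free (in practice a framework such as pytest would run them).

```python
import sqlite3

def monthly_revenue(conn: sqlite3.Connection) -> list[tuple]:
    """The 'critical query' under test: total revenue per month."""
    return conn.execute(
        "SELECT month, SUM(amount) FROM sales GROUP BY month ORDER BY month"
    ).fetchall()

# Fixture data with a known, hand-computed answer.
conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE sales (month TEXT, amount REAL);
    INSERT INTO sales VALUES
        ('2024-01', 100.0), ('2024-01', 50.0), ('2024-02', 75.0);
    """
)

result = monthly_revenue(conn)
# Validate the query against the known expected output.
assert result == [("2024-01", 150.0), ("2024-02", 75.0)]
assert sum(total for _, total in result) == 225.0  # totals reconcile
print("all checks passed")
conn.close()
```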
Work closely with senior data scientists, analysts, and business owners to clarify requirements and deliver value quickly.
Proactively identify and raise data quality issues — and suggest practical solutions rather than short‑term fixes.