Data Engineer

Hollard

Johannesburg

On-site

ZAR 600 000 - 900 000

Full time


Job summary

A leading insurance company in Johannesburg seeks a Data Engineer to design, develop, and maintain robust Big Data Pipelines and data architectures. The ideal candidate will have extensive experience with Azure Data Solutions and data programming languages such as Python and SQL. Responsibilities include ensuring compliance with data classification, building optimal data infrastructures, and collaborating with technology teams to provide viable data solutions.

Qualifications

  • Profound technical expertise in Data Engineering, Data Warehouse Design, and advanced analytics.
  • Experience using data-related programming languages and frameworks.

Responsibilities

  • Build and maintain Big Data Pipelines.
  • Ensure compliance with data classification requirements.
  • Develop and maintain data architectures.
  • Collaborate with data teams to deliver data solutions.
  • Provide OLAP support and end-user training.

Skills

Python
Spark
SQL
Azure Data Factory
Azure Data Explorer
Azure Databricks
Data Engineering
Data Warehouse Design
Advanced Analytics
Business Intelligence Tools

Job description

Role Objectives

The primary objective of this role is to design, develop, and maintain robust and scalable Big Data Pipelines and data architectures, ensuring optimal extraction, transformation, and loading of data across multiple application platforms. The Data Engineer will act as a custodian of data, ensuring compliance with information classification requirements, and enabling data consumers to build and optimize data consumption effectively. This role demands profound technical expertise in Data Engineering, Data Warehouse Design, and advanced analytics, utilizing modern software engineering concepts and BI tools. The Data Engineer will leverage Azure Data Solutions and various data-related programming languages and frameworks to analyze data elements, perform root‑cause analysis, and collaborate with technology colleagues and data teams to deliver viable data solutions within architectural guidelines.

Key Responsibilities
  • Responsible for building and maintaining Big Data Pipelines
  • Act as custodian of data, ensuring that data is shared in line with information classification requirements on a need-to-know basis
  • Apply programming skills in data‑related programming languages and frameworks, such as Python, Spark, and SQL
  • Experience with Azure Data Solutions: Azure Data Factory, Azure Data Explorer, Azure Databricks
  • Profound technical understanding of Data Engineering and Data Warehouse Design
  • Familiarity with modern software engineering concepts and know‑how in advanced analytics and BI tools
  • Develop and maintain complete data architecture across several application platforms
  • Analyze data elements and systems
  • Build required infrastructure for optimal extraction, transformation and loading of data
  • Build, create, manage and optimize data pipelines
  • Create data tooling, enabling data consumers in building and optimizing data consumption
  • Execute on the design, definition and development of APIs
  • Develop across several application platforms
  • Experience performing root‑cause analysis on internal and external data and processes
  • Knowledge of integration patterns, styles, protocols and systems
  • Liaise and collaborate with technology colleagues and data teams to understand viable data solutions within architectural guidelines
  • Update technical documentation on data extracts and report functionality to support future understanding and ongoing maintenance. Respond to user queries, error logging, and enhancement requests to ensure reports are used and serve their intended purpose.
  • Provide OLAP support and end‑user training on the various cubes used for reporting downstream
  • Data Modelling – emphasizes what data is needed and how it should be organized, rather than what operations will be performed on the data. A data model is like an architect's building plan: it helps build a conceptual model and establishes the relationships between data items. This capability is required in preparation for the Data Platform implementation, and aligns with Group Data's approach of implementing ERWIN as the tool of choice for Data Modelling
  • Data Architecture – comprises the models, policies, rules, and standards that govern which data is collected and how it is stored, arranged, integrated, and used in data systems and in organizations. This capability is also aligned with the direction Group Data is taking.
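The extraction, transformation, and loading workflow at the heart of this role can be sketched in plain Python. This is a minimal illustration only, using the standard-library sqlite3 module as a stand-in for the actual Azure data stores; the table and column names (insurance policies and premiums) are hypothetical:

```python
import sqlite3

# Hypothetical in-memory store standing in for the real source and
# target platforms (e.g. a raw landing zone and a curated warehouse).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_policies (policy_id TEXT, premium TEXT)")
conn.executemany(
    "INSERT INTO raw_policies VALUES (?, ?)",
    [("P001", "1200.50"), ("P002", "980.00"), ("P003", None)],
)

def extract(con):
    """Extract: pull raw rows from the source table."""
    return con.execute("SELECT policy_id, premium FROM raw_policies").fetchall()

def transform(rows):
    """Transform: cast premiums to numbers and drop incomplete records."""
    return [(pid, float(p)) for pid, p in rows if p is not None]

def load(con, rows):
    """Load: write cleaned rows into the curated target table."""
    con.execute(
        "CREATE TABLE IF NOT EXISTS curated_policies (policy_id TEXT, premium REAL)"
    )
    con.executemany("INSERT INTO curated_policies VALUES (?, ?)", rows)

load(conn, transform(extract(conn)))
cleaned = conn.execute("SELECT COUNT(*) FROM curated_policies").fetchone()[0]
print(cleaned)  # number of rows that survived the transform
```

In production, the same extract/transform/load structure would typically be expressed as Spark jobs orchestrated by Azure Data Factory rather than in-process SQL, but the separation of stages is the same.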