Data Engineer (138895)

Nedbank Private Wealth

Johannesburg

On-site

ZAR 450,000 - 700,000

Full time

25 days ago

Job summary

Nedbank Private Wealth is looking for a Data Engineer in Johannesburg to enhance its data infrastructure and support data initiatives. You will leverage your data expertise to build scalable data solutions and collaborate with a dynamic squad to ensure data quality and accessibility that deliver business value.

Qualifications

  • 5 years' experience in data engineering and analytics.
  • Experience with big data technologies and cloud platforms.
  • SAS certification preferred; exposure to Agile methodologies.

Responsibilities

  • Build and manage scalable and secure data infrastructure.
  • Create and maintain data pipelines for ingestion and streaming.
  • Collaborate with various teams to deliver data solutions.

Skills

Data Warehousing
Data Analysis
Data Modeling
Data Pipelines
ETL tools
Communication
Problem Solving

Education

Matric / Grade 12 / National Senior Certificate
Advanced Diplomas / National 1st Degrees
BSc / BEng / BCom

Tools

SAS
Hadoop
Spark
AWS
Apache Hive
NoSQL
Kafka

Job description

REQ 138895

TAS : Keabetswe Modise

Closing Date : 08 May 2025

Job Family

Information Technology

Data

Manage Self : Professional

Job Purpose

The purpose of the Data Engineer is to leverage their data expertise and data-related technologies, in line with the Nedbank Data Architecture Roadmap, to advance technical thought leadership for the Enterprise, deliver fit-for-purpose data products and support data initiatives. In addition, Data Engineers enhance the data infrastructure of the bank to enable advanced analytics, machine learning and artificial intelligence by providing clean, usable data to stakeholders. They also create data pipelines (ingestion, provisioning, streaming, self-service, API) and solutions around big data that support the bank's strategy to become a data-driven organisation.

Job Responsibilities

  • Responsible for SAS system administration, maintenance, improvement and application support on SAS platforms.
  • Responsible for the maintenance, improvement, cleaning and manipulation of data in the bank's operational and analytics databases.
  • Data Infrastructure : Build and manage scalable, optimised, supported, tested, secure and reliable data infrastructure, e.g. using infrastructure and databases (DB2, PostgreSQL, MSSQL, HBase, NoSQL, etc.), data lake storage (Azure Data Lake Gen 2), cloud-based solutions (SAS, Azure Databricks, Azure Data Factory, HDInsight) and data platforms (SAS, Azure Cloud). Ensure data security and privacy in collaboration with Information Security, the CISO and Data Governance.
  • Data Pipeline Build (Ingestion, Provisioning, Streaming and API) : Build and maintain data pipelines to :

Create data pipelines for data integration (Data Ingestion, Data Provisioning and Data Streaming), utilising both on-premise tool sets and cloud data engineering tool sets.

  • Efficiently extract data (Data Acquisition) from Golden Sources, Trusted Sources and Writebacks, with data integration from multiple sources, formats and structures
  • Provide data to the respective Lines of Business Marts, Regulatory Marts and Compliance Marts through self-service data virtualisation
  • Provide data to applications or Nedbank data consumers
  • Handle big data technologies and streaming (Kafka)
  • Drive utilisation of data integration tools and cloud data integration tools (Azure Data Factory and Azure Databricks)
  • Data Modelling and Schema Build : In collaboration with Data Modellers, create data models and database schemas on the Data Reservoir, Data Lake, Atomic Data Warehouse and Enterprise Data Marts.
  • Nedbank Data Warehouse Automation : Automate, monitor and improve the performance of data pipelines.
  • Collaboration : Collaborate with Data Analysts, Software Engineers, Data Modellers, Data Scientists, Scrum Masters and Data Warehouse teams as part of a squad to contribute to data architecture detail designs, take end-to-end ownership of Epics and ensure that data solutions deliver business value.
  • Data Quality and Data Governance : Ensure that reasonable data quality checks are implemented in the data pipelines to maintain a high level of data accuracy, consistency and security.
  • Performance and Optimisation : Ensure the performance of the Nedbank data warehouse integration patterns, batch and real-time jobs, streaming and APIs.
  • API Development : Build APIs that enable the data-driven organisation, ensuring that the data warehouse is optimised for APIs by collaborating with Software Engineers.

Essential Qualifications - NQF Level

  • Matric / Grade 12 / National Senior Certificate
  • Advanced Diplomas / National 1st Degrees

Preferred Qualification

  • Field of Study : BSc / BEng / BCom

Preferred Certifications

  • SAS certification
  • Exposure to Agile Methodologies
  • Exposure to Cloud technologies and DevOps

Minimum Experience Level

  • Total number of years of experience : 5 years
  • Experienced at working independently within a squad, with the demonstrated knowledge and skills to deliver data outcomes without supervision.
  • Experience designing, building and maintaining data warehouses and data lakes.
  • Experience with big data technologies such as Hadoop, Spark and Hive.
  • Experience with programming languages such as Python, Java and SQL.
  • Experience with relational databases and NoSQL databases.
  • Experience with cloud computing platforms such as AWS, Azure and GCP.
  • Experience with data visualisation tools.
  • Results-driven, analytical and creative thinker with a demonstrated ability for innovative problem solving.

Technical / Professional Knowledge

  • Data Warehousing
  • Data Analysis and Data Modelling
  • Data Pipelines and ETL tools (SAS ETL)
  • Agile Delivery
  • Decision Making
  • Communication
  • Technical / Professional Knowledge and Skills
  • Building Partnerships

Please contact the Nedbank Recruiting Team at

Key Skills

Apache Hive, S3, Hadoop, Redshift, Spark, AWS, Apache Pig, NoSQL, Big Data, Data Warehouse, Kafka, Scala

Employment Type : Full Time

Experience : 5 years

Vacancy : 1
