Specialist Support Engineer: DataOps

Absa Group

Sandton

On-site

ZAR 400,000 - 600,000

Full time

Yesterday

Job summary

A leading financial institution is looking for a Specialist Support Engineer: DataOps to join its Data Operations team. The role involves managing a team, optimising data pipelines, and developing Hadoop applications. With a focus on enhancing data processes, this position offers the chance to contribute significantly to data operations, supporting critical jobs in a dynamic environment.

Qualifications

  • 3+ years of experience in a Big Data environment.
  • Minimum one year of team management experience.
  • Familiarity with Hadoop ecosystem and its components.

Responsibilities

  • Manage an assigned team and oversee development plans.
  • Support end-to-end data pipelines and oversee enhancements.
  • Responsible for coding and programming Hadoop applications.

Skills

Java
Scala
Python
Hadoop
Apache Spark
Kafka
SQL
Big Data Development
Data Modelling

Education

Bachelor's Degree in Information Technology

Job description

Specialist Support Engineer: DataOps
Remote type: Hybrid
Location: Sandton
Time type: Full time
Posted: 3 Days Ago
End date: July 10, 2025 (3 days left to apply)
Job requisition ID: R-15977189
Empowering Africa’s tomorrow, together…one story at a time.

With over 100 years of rich history and strongly positioned as a local bank with regional and international expertise, a career with our family offers the opportunity to be part of this exciting growth journey, to reset our future and shape our destiny as a proudly African group.

Job Summary

A Specialist Support Engineer is a professional responsible for programming Hadoop applications who knows all the components of the Hadoop ecosystem, understands how those components fit together, and can decide which Hadoop component is best suited to a specific task. In this role, you will be part of the Data Operations team, which supports all applications on the Hadoop ecosystem. The role extends to maintaining changes on datasets and to optimisation activities across all applications, including new development. You therefore need a working understanding of programming in order to manage Big Data and transfer data to Hadoop.

Job Description

Team Context

Data Engineering is responsible for the central data platform that receives and distributes data across the bank. This is a multi-platform environment and leverages a blend of custom, commercial and open-source tools to manage and support thousands of critical data-related jobs. These jobs are supported and updated in line with changes across the landscape to avoid disruption to downstream data consumers.

Responsibilities

  • Manage an assigned team through day-to-day support tasks.
  • Oversee the team's development plans and provide mentorship.
  • Provide guidance and peer review.
  • Support pipelines end to end.
  • Build and deploy enhancements and new developments or new data pipelines.
  • Identify and drive optimisation opportunities across the environment.
  • Manage the handover of new applications ensuring that required standards and practices are met.
  • Improve recovery time in the event of production failures.
  • Test prototypes and oversee handover to the Data Operations teams.
  • Attend and contribute to regular team and user meetings.
  • Code and program Hadoop applications.
  • Support high-speed querying.

Job Experience & Skills Required:

  • 3+ years' experience working in a Big Data environment, building and optimising big data pipelines, architectures and data sets using, for example, Java, Scala, Python, Hadoop, Apache Spark and Kafka.
  • Minimum of one year's experience with the Scala programming language.
  • Minimum of one year's experience managing a team.
  • Cross-domain knowledge.
  • Familiarity with the Hadoop ecosystem and its components.
  • Good knowledge of Hadoop concepts.
  • Solid working experience in Big Data development using SQL or Python.
  • Experience in Big Data development using Spark.
  • Experience with Hadoop, HDFS and MapReduce.
  • Experience in database design, development and data modelling.

The following additional knowledge, skills and attributes are preferred:

  • Good knowledge of back-end programming, specifically Java.
  • Experience developing in a Linux environment and using its basic commands.
  • Understanding of Cloud technologies and migration techniques.
  • Understanding of data streaming and the intersection of batch and real-time data.
  • Ability to write reliable, manageable, and high-performance code.
  • Basic knowledge of SQL, database structures, principles, and theories.
  • Knowledge of workflow/schedulers.
  • Strong collaboration and communication skills.
  • Strong analytical and problem-solving skills.
  • Experience in Quality Assurance.
  • Experience in Stakeholder Management.
  • Experience in Testing.

Education

Bachelor's Degree: Information Technology

Absa Bank Limited is an equal opportunity, affirmative action employer. In compliance with the Employment Equity Act 55 of 1998, preference will be given to suitable candidates from designated groups whose appointments will contribute towards achievement of equitable demographic representation of our workforce profile and add to the diversity of the Bank.

Absa Bank Limited reserves the right not to make an appointment to the post as advertised.
