Big Data Operations Engineer - Hadoop & Spark

Absa Group

Sandton

On-site

ZAR 500 000 - 700 000

Full time

Yesterday

Job summary

A leading financial institution seeks a skilled applicant for an application support role focused on the Hadoop ecosystem. Candidates should hold a Bachelor's degree in Computer Science or a related field and have 2+ years of experience in a Big Data environment with technologies such as Java, Python, and Hadoop. Responsibilities include building and deploying new data pipelines, supporting processes end to end, and optimising systems. Join the team to contribute to critical data operations and development work.

Qualifications

  • 2+ years of experience in a Big Data environment.
  • Experience in database design, development, and data modelling.
  • Knowledge of Hadoop, HDFS, and MapReduce.

Responsibilities

  • Support pipelines end to end.
  • Build enhancements and new developments.
  • Identify optimisation opportunities.

Skills

Big data environment experience
Building big data pipelines
Java
Scala
Python
Hadoop
Apache Spark
Kafka
SQL

Education

Bachelor's degree in Computer Science
Bachelor's degree in Information Systems

Tools

Hadoop ecosystem tools
Linux
SQL