Snr Specialist: IT Systems Developer (Cloud Data Engineer)

Liberty Group Limited

Johannesburg

On-site

ZAR 300 000 - 700 000

Full time

11 days ago

Job summary

An established industry player is seeking a Data Engineer to design and implement large-scale data solutions. In this dynamic role, you will work with cutting-edge AWS technologies to build production data pipelines, re-platform data warehouses to the cloud, and support analytical infrastructure. You will collaborate with cross-functional teams to enhance data capabilities and drive insights from complex datasets. This opportunity offers a chance to grow your skills in a collaborative environment while contributing to innovative data solutions that impact business decisions. Join a forward-thinking organization that values diversity and promotes equal opportunities for all.

Qualifications

  • Advanced Data Engineering knowledge and experience with modern data practices.
  • Proficiency in SQL and expertise in writing complex queries.
  • Experience with AWS cloud services and data pipeline management.

Responsibilities

  • Design and implement large-scale enterprise data solutions using AWS technologies.
  • Analyze and re-platform data warehouses to AWS cloud services.
  • Collaborate with tech teams for advanced analytics and machine learning.

Skills

Data Engineering
SQL
Python
PySpark
Analytical Skills
Big Data Technologies

Education

Bachelor's Degree in Computer Science
Bachelor's Degree in Information Technology

Tools

AWS Glue
AWS Step Functions
AWS Redshift
AWS Lambda
AWS Athena
AWS EC2
AWS EMR
AWS RDS
AWS DynamoDB
Control-M

Job description

At Liberty, we employ more than 6,000 people across 7 businesses in 18 African countries.

Every day, our employees grow their knowledge by working with diverse groups of people who specialise in a wide range of skills across insurance, asset management, investment and health products.

We continually seek to engage, develop, recognise and reward the people who make our business great.

Key Responsibilities
  1. Design and implement large-scale enterprise data solutions using a combination of AWS Glue, AWS Step Functions, AWS Redshift, AWS Lambda, AWS Athena, AWS Lake Formation, Spark and Python.
  2. Analyze, re-architect and re-platform on-premises data warehouses to data platforms on AWS cloud using AWS and third-party services.
  3. Design and build production data pipelines from ingestion to integration within a big data architecture, using PySpark, Python and SQL (a minimal sketch follows this list).
  4. Design, implement and support an analytical data infrastructure providing ad-hoc access to large datasets and computing power.
  5. Interface with other technology teams to extract, transform, and load data from a wide variety of data sources using SQL and AWS big data technologies.
  6. Continually research the latest big data and visualization technologies to provide new capabilities and increase efficiency.
  7. Collaborate with other tech teams to implement advanced analytics algorithms that exploit our rich datasets for statistical analysis, prediction, clustering and machine learning.
  8. Help continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers.
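
As an illustration of the ingestion-to-integration work described in point 3, here is a minimal sketch in plain PySpark. The bucket paths, column names and application name are hypothetical assumptions made for the example, not details of Liberty's actual platform.

    # Minimal, hypothetical PySpark pipeline: ingest raw CSV, cleanse and
    # type the data, then write partitioned Parquet for downstream querying.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("policy-ingest-example").getOrCreate()

    # Ingest: read raw files landed in an (assumed) S3 bucket.
    raw = spark.read.option("header", "true").csv(
        "s3://example-bucket/landing/policies/"
    )

    # Transform: deduplicate, cast types and stamp a load date for partitioning.
    clean = (
        raw.dropDuplicates(["policy_id"])
           .withColumn("premium", F.col("premium").cast("double"))
           .withColumn("load_date", F.current_date())
    )

    # Integrate: write partitioned Parquet that services such as AWS Athena
    # or AWS Glue can catalogue and query.
    clean.write.mode("append").partitionBy("load_date").parquet(
        "s3://example-bucket/curated/policies/"
    )

In an AWS Glue job, the same logic would typically run on a GlueContext-managed Spark session, with source and target locations resolved through the Glue Data Catalog rather than hard-coded paths.
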
Minimum Requirements
  1. Advanced Data Engineering knowledge and experience working with modern data practices.
  2. Experience building and operating highly available, distributed systems for the extraction, ingestion and processing of large data sets.
  3. Experience working with distributed systems as they pertain to data storage and computing.
  4. Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  5. Strong analytical skills related to working with unstructured datasets.
  6. Experience building processes that support data transformation, data structures, metadata, dependency and workload management.
Additional Minimum Experience
  1. Proficiency in SQL with expertise in writing and optimizing complex queries.
  2. A successful history of manipulating, processing and extracting value from large, disconnected data sets.
  3. Experience supporting and working with cross-functional teams in a dynamic environment.
  4. Experience in Data Engineering roles.
  5. Experience with the following is a must: AWS Glue, PySpark, SQL.
  6. Experience with data pipeline and workflow management tools like AWS StepFunctions and Control-M would be beneficial but not required.
  7. Experience with AWS cloud services: EC2, EMR, RDS, DynamoDB would be beneficial but not required.
Minimum Qualifications
  1. Bachelor's Degree in Computer Science, Information Technology, or other relevant fields.
  2. Experience in any of the following: AWS Athena, AWS Glue, PySpark, AWS DynamoDB, AWS Redshift, AWS Lambda and AWS Step Functions.
  3. Proficient in SQL, Python and PySpark.
  4. Proficient in utilizing a cloud platform and services.
  5. Knowledge of software engineering best practices across the development lifecycle, including agile methodologies, coding standards, code reviews, source management, build processes, testing, DevOps and operations.

Liberty Group Limited is an equal opportunity, affirmative action employer.

In compliance with the Employment Equity Act 55 of 1998 and the group's Transformation Strategy, preference will be given to suitable candidates from designated groups whose appointments will contribute towards the achievement of equitable demographic representation of our workforce profile and add to the diversity of the organisation. The Company's approved Employment Equity Plan and Targets will be considered as part of the recruitment process.

As an Equal Opportunities employer, we actively encourage and welcome people with various disabilities to apply.
