Data Engineer

Eqplus Technologies (Pty) Ltd

Johannesburg

On-site

ZAR 600 000 - 800 000

Full time

3 days ago

Job summary

A leading company in Johannesburg is seeking an experienced Data Engineer to maintain its data warehouse and develop ETL solutions on AWS. The ideal candidate will have a strong background in data management and cloud technologies, ensure optimal performance, and provide technical leadership in data processing and analysis.

Qualifications

  • 6-7 years of experience in a similar role.
  • Strong SQL background.

Responsibilities

  • Design and develop ETL pipelines using AWS services.
  • Conduct root cause analysis on production issues.
  • Manage data from different sources.

Skills

AWS
Python
SQL
Data Warehousing

Education

Degree in Information Technology

Tools

Talend
SSIS
SSAS
SSRS
Clover ETL
Hadoop
Spark
Jupyter Notebook

Job description

Job Reference: GTG-KV-1

The successful candidate will be responsible for maintaining the data warehouse through the design and implementation of ETL/ELT methodologies and technologies, and for providing maintenance and support of our ETL and ML environments.

To ensure optimal performance, the candidate will conduct root cause analysis on production issues and provide technical leadership throughout the entire information management process for both structured and unstructured data.

Duties & Responsibilities
  1. Design and develop solutions in AWS to support the project's data flows.
  2. Develop and maintain automated ETL pipelines (with monitoring) using languages such as Python and SQL, together with AWS services such as S3, Glue, Lambda, SNS, SQS, KMS, and Redshift.
  3. Develop Glue jobs for batch data processing and maintain the Glue Data Catalog for metadata synchronization.
  4. Develop data pipelines using AWS Lambda and Step Functions for data processing.
  5. Manage data coming from different sources.
  6. Use AWS services to manage applications in the cloud and to create or modify instances.
  7. Implement solutions using the Scaled Agile Framework (SAFe).
  8. Optimize the performance of existing Hadoop algorithms using Spark.
  9. Create Hive tables, load data, and write Hive queries.

Requirements
  • Degree in Information Technology.
  • 6-7 years of experience in a similar role.
  • Experience with Talend.
  • Knowledge of SSIS, SSAS, and SSRS.
  • Experience with Clover ETL.
  • Strong SQL background.
  • Experience with AWS.
  • Familiarity with Jupyter Notebook.
  • Experience in Data Warehousing and Cloud Warehousing.

CVs should be submitted directly to [contact information].

If you do not receive communication within 2 weeks of your application, kindly consider your application unsuccessful.
