Data Engineer (Databricks, AWS, Python) – JHB – up to R450k per annum

E-Merge

Johannesburg

On-site

ZAR 350 000 - 450 000

Full time

25 days ago

Job summary

A global financial services organization seeks a Data Engineer in Johannesburg. The role involves designing and maintaining scalable data pipelines using Python and AWS services. Applicants should have a Bachelor's degree in a relevant field, proficiency in Python, and experience with AWS and Databricks. The role is permanent and offers a salary range of R350k to R450k per annum based on experience.

Qualifications

  • Proficiency in Python for data manipulation and scripting.
  • Experience with AWS services such as S3, Glue, Lambda, and Redshift.
  • Familiarity with Databricks and Spark for big data processing.
  • Strong SQL skills for querying and managing relational databases.
  • Prior experience in the insurance sector, with an understanding of insurance data structures and regulatory requirements.

Responsibilities

  • Design, build, and maintain scalable data pipelines using Python, AWS services, and Databricks.
  • Integrate structured and unstructured data from various sources.
  • Work closely with data scientists and analysts to deliver solutions for analytics.
  • Implement data validation and cleansing processes.
  • Maintain comprehensive documentation of data workflows.
  • Stay updated with emerging technologies in data engineering.

Education

Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field

Tools

Python
AWS (S3, Glue, Lambda, Redshift)
Databricks
Spark
SQL

Job description

Join a global financial services organisation whose core purpose is to seek out and invest in exceptional individuals, and which prides itself on attracting intellectual leaders who are the best in their respective fields. You will be accountable for assisting in designing, building, and maintaining scalable data pipelines using Python, AWS services, and Databricks.

Responsibilities
  • Data Pipeline Development: Design, build, and maintain scalable data pipelines using Python, AWS services (e.g., S3, Glue, Lambda), and Databricks to support data ingestion, transformation, and storage.
  • Data Integration: Integrate structured and unstructured data from various sources, including policy administration systems, claims databases, and customer relationship management (CRM) platforms.
  • Collaboration: Work closely with data scientists, analysts, and business stakeholders to understand data requirements and deliver solutions that support analytics and reporting needs.
  • Data Quality Assurance: Implement data validation and cleansing processes to ensure the accuracy and reliability of data used for decision-making.
  • Documentation: Maintain comprehensive documentation of data workflows, schemas, and processes to facilitate knowledge sharing and compliance.
  • Continuous Learning: Stay updated with emerging technologies and best practices in data engineering to contribute to the team's growth and innovation.
Qualifications & Experience
  • Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field.
  • Proficiency in Python for data manipulation and scripting.
  • Experience with AWS services such as S3, Glue, Lambda, and Redshift.
  • Familiarity with Databricks and Spark for big data processing.
  • Strong SQL skills for querying and managing relational databases.
  • Industry Experience: Prior experience in the insurance sector, with an understanding of insurance data structures and regulatory requirements.

The reference number for this position is NG60471. This is a permanent, hybrid role in Johannesburg offering a salary of R350k to R450k per annum CTC, negotiable based on experience. E-mail Nokuthula.
