Specialist Data Engineer

Absa Group

Gauteng

On-site

ZAR 600 000 - 800 000

Full time

Today
Job summary

A leading financial institution seeks a Data Engineer to join their innovative team. The role involves developing and optimizing data solutions, ensuring data quality, and contributing to architecture design. Ideal candidates will have strong experience in data engineering, AWS services, and a passion for mentoring others.

Qualifications

  • 5+ years of relevant data and software engineering experience.
  • Proficiency in data pipeline development and AWS services.

Responsibilities

  • Design and deliver data solutions aligned with business requirements.
  • Build analytics tools utilizing data pipelines for optimized data sets.

Skills

Data Engineering
Python
AWS
CI/CD
Data Governance
Data Quality

Education

Bachelor's Degree in Information Technology

Tools

AWS Glue
Hadoop
DataZone

Job description

Empowering Africa's tomorrow, together… one story at a time. With a rich history and a strong position as a local bank with regional and international expertise, a career with our family offers the opportunity to be part of an exciting growth journey, to reset our future, and to shape our destiny as a proudly African group.

Job Summary

Work embedded as a member of a squad or across multiple squads to produce, test, document, and review algorithms and data-specific source code that supports the deployment and optimization of data retrieval, processing, storage, and distribution for a business area.

Key skills and experience required:
  • 5+ years of relevant data and software engineering experience
  • Relevant B-Degree in Computer Science preferred, but not essential if the minimum experience and practical application are evident

Core Data Engineering
  • Proficiency in data pipeline development (batch, real-time, file-based)
  • Strong experience with AWS services, especially S3, Glue, Lambda, Step Functions, and DataZone
  • Familiarity with data lakehouse concepts, Hadoop/Hive, or equivalent
  • Experience ingesting from diverse sources: databases (RDBMS/NoSQL), APIs, file drops, message queues

Data Distribution and Access
  • Designing reusable access layers (e.g., APIs, event streams, S3-based delivery)
  • Implementing secure data access controls and RBAC
  • Building scalable distribution mechanisms for on-prem and cloud consumers

Metadata, Cataloging, and Governance
  • Experience with data catalogs (AWS Glue, Ataccama, DataZone, or similar)
  • Understanding of data lineage, classification, and metadata management
  • Contributing to or building tooling for data governance and compliance

Software Engineering Best Practices
  • Clean, modular design with maintainability and automation in mind
  • Proficiency in Python, C#, or other relevant languages
  • Experience with CI/CD pipelines, infrastructure as code, and version control

Monitoring and Quality
  • Designing and implementing data quality rules, SLAs, validation, and alerts
  • Familiarity with centralized logging, observability, and monitoring platforms

Team and Strategy Contribution
  • Ability to mentor developers and lead peer reviews
  • Comfortable presenting to technical and non-technical stakeholders
  • Track record of influencing data standards, architectural decisions, or best practices

Preferred Experience
  • Experience with centralized ingestion frameworks or orchestrators
  • Understanding of hybrid data architecture: bridging on-prem and cloud ecosystems
  • Exposure to reporting platforms or BI tool integration with backend datasets
Key Accountabilities
  • Understand the technical landscape and bank-wide architecture connected to or dependent on the supported business area to design and deliver data solutions.
  • Translate data architecture direction and business requirements into data solution designs.
  • Participate in design thinking processes for data solution blueprints.
  • Leverage relational and NoSQL databases, integration, and streaming platforms for sustainable data solutions.
  • Design data retrieval, storage, and distribution solutions, contributing to all development lifecycle phases.
  • Develop high-quality data processing, retrieval, storage, and distribution designs in a test-driven environment.
  • Build analytics tools utilizing data pipelines for optimized data sets.
  • Create and maintain CI/CD pipelines, automating tasks with tools like Ansible or Chef.
  • Debug source code and enhance features.
  • Assemble large, complex data sets to meet business needs and manage data pipelines.
  • Build infrastructure for high-volume data delivery and create tools for analytics and data science teams.
  • Ensure designs support principles of self-service, scalability, resilience, and automation.
  • Apply design patterns and paradigms to deliver solutions.
  • Support infrastructure build for data extraction, transformation, and loading.
  • Continuously optimize and automate data processes.
  • Ensure quality assurance and testing of data solutions aligned with standards.
  • Implement security standards to ensure data separation, security, and quality.
  • Contribute to aligning solutions with group architecture, standards, and long-term strategies.
  • Monitor performance and optimize data solutions continually.
  • Stay updated on emerging data technologies and best practices.
Education

Bachelor's Degree in Information Technology. Absa Bank Limited is an equal opportunity employer.

Preference will be given to suitable candidates from designated groups to promote diversity and meet employment equity goals. Absa Bank Limited reserves the right not to make an appointment to the advertised post.
