Job Description
About the Role:
The Data Engineer will work on enterprise-wide data provisioning, spanning multiple data governance domains and data assets. Responsibilities include ensuring secure data sharing, adhering to protection and compliance requirements, supporting enterprise Data & Analytics initiatives (including high-priority use cases), and enabling data provisioning for operational processes.
Role Responsibilities:
Data Engineers in this environment are custodians of critical data assets and pipelines. Responsibilities include:
- Building and maintaining large-scale Big Data pipelines on cloud-based data platforms
- Ensuring secure and compliant data sharing aligned with information classification standards
- Supporting enterprise Data & Analytics initiatives and high-priority use cases
- Continuously improving and automating data engineering processes
- Evaluating emerging tools and technologies to drive innovation
- Mentoring and upskilling team members
- Maintaining high-quality technical documentation
Requirements
Essential Skills:
Candidates must demonstrate above-average expertise in the following areas:
Cloud & Infrastructure
- Terraform
- Docker
- Linux / Unix
- CloudFormation
- CodeBuild / CodePipeline
- CloudWatch
- SNS
- S3
- Kinesis (Data Streams, Firehose)
- Lambda
- DynamoDB
- Step Functions
- Parameter Store
- Secrets Manager (see the Boto3 sketch after this list)
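As a minimal illustration of the AWS SDK work this implies, the sketch below reads a configuration value from Parameter Store and a credential from Secrets Manager using Boto3. The parameter and secret names are hypothetical placeholders, not part of the role specification.

```python
import json

import boto3

# Hypothetical names used purely for illustration.
PARAMETER_NAME = "/data-platform/landing-bucket"
SECRET_NAME = "data-platform/db-credentials"


def load_pipeline_config() -> dict:
    """Fetch non-secret config from Parameter Store and credentials from Secrets Manager."""
    ssm = boto3.client("ssm")
    secrets = boto3.client("secretsmanager")

    # SecureString (or plain) parameter holding the landing bucket name.
    landing_bucket = ssm.get_parameter(
        Name=PARAMETER_NAME, WithDecryption=True
    )["Parameter"]["Value"]

    # JSON secret holding database credentials.
    db_credentials = json.loads(
        secrets.get_secret_value(SecretId=SECRET_NAME)["SecretString"]
    )

    return {"landing_bucket": landing_bucket, "db_credentials": db_credentials}
```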
Programming & Data Engineering
- Python 3.x
- SQL (Oracle / PostgreSQL)
- PySpark
- Boto3
- ETL development (see the PySpark sketch after this list)
- Big Data platforms
- PowerShell / Bash
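The following is a minimal PySpark sketch of the kind of ETL step described above: reading raw CSV data from S3, aggregating it, and writing partitioned Parquet. The bucket paths, column names, and job structure are assumptions for illustration only.

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical S3 locations used purely for illustration.
SOURCE_PATH = "s3://landing-bucket/sales/raw/"
TARGET_PATH = "s3://curated-bucket/sales/daily_totals/"


def run_daily_sales_etl() -> None:
    """Read raw CSV files, aggregate per day, and write partitioned Parquet."""
    spark = SparkSession.builder.appName("daily-sales-etl").getOrCreate()

    raw = spark.read.option("header", "true").csv(SOURCE_PATH)

    daily_totals = (
        raw.withColumn("amount", F.col("amount").cast("double"))
        .groupBy("sale_date")
        .agg(F.sum("amount").alias("total_amount"))
    )

    daily_totals.write.mode("overwrite").partitionBy("sale_date").parquet(TARGET_PATH)


if __name__ == "__main__":
    run_daily_sales_etl()
```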
Data Platforms & Tools
- Glue
- Athena (see the query sketch after this list)
- Technical data modelling & schema design (hands‑on, not drag‑and‑drop)
- Kafka
- AWS EMR
- Redshift
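As a brief, illustrative sketch of querying curated data with Athena through Boto3, the snippet below starts a query, polls for completion, and fetches the results. The database, table, and S3 result location are hypothetical.

```python
import time

import boto3

# Hypothetical database, table, and result location used purely for illustration.
DATABASE = "curated"
QUERY = "SELECT sale_date, total_amount FROM daily_sales LIMIT 10"
OUTPUT_LOCATION = "s3://athena-results-bucket/queries/"


def run_athena_query() -> list:
    """Start an Athena query, poll until it finishes, and return the result rows."""
    athena = boto3.client("athena")

    execution = athena.start_query_execution(
        QueryString=QUERY,
        QueryExecutionContext={"Database": DATABASE},
        ResultConfiguration={"OutputLocation": OUTPUT_LOCATION},
    )
    query_id = execution["QueryExecutionId"]

    # Poll the execution status until Athena reports a terminal state.
    while True:
        state = athena.get_query_execution(QueryExecutionId=query_id)[
            "QueryExecution"
        ]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)

    if state != "SUCCEEDED":
        raise RuntimeError(f"Athena query ended in state {state}")

    return athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
```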
Business & Analytics
- Business Intelligence (BI) experience
- Strong understanding of data governance and security
Advantageous Skills:
- Advanced data modelling expertise, especially in Oracle SQL
- Strong analytical skills for large, complex datasets
- Experience with testing, data validation, and transformation accuracy
- Excellent documentation skills and strong written and verbal communication
- Ability to work independently, multitask, and collaborate within teams
- Experience building data pipelines using AWS Glue, Data Pipeline, or similar
- Familiarity with AWS S3, RDS, and DynamoDB
- Solid understanding of software design patterns
- Experience preparing technical specifications, designing, coding, testing, and debugging solutions
- Strong organisational abilities
- Knowledge of data formats such as Parquet, Avro, JSON, XML, and CSV
- Experience with Data Quality tools such as Great Expectations
- Experience working with REST APIs
- Basic networking knowledge and troubleshooting skills
- Understanding of Agile methodologies
- Experience with collaboration and documentation tools such as Confluence and Jira
Qualifications & Experience
- Relevant IT, Business, or Engineering Degree
- Experience developing technical documentation and artefacts
- Experience with enterprise collaboration tools
Preferred Certifications
- AWS Certified Cloud Practitioner
- AWS Certified SysOps Administrator - Associate
- AWS Certified Developer - Associate
- AWS Certified Solutions Architect - Associate
- AWS Certified Solutions Architect - Professional
- HashiCorp Certified: Terraform Associate
Key Technologies
Terraform, Python, Docker, AWS, SQL