- Collaborate with analysts, developers, architects, and business stakeholders to understand data needs and deliver technical solutions.
- Design, build, and maintain data pipelines and integrations using AWS services such as S3, Glue, Lambda, and Redshift.
- Develop and manage data lakes and data warehouses on AWS.
- Support and maintain production and non-production data environments.
- Optimize data storage and query performance through schema design and efficient data processing.
- Implement CI/CD practices for data infrastructure, including monitoring, logging, and alerting.
- Ensure data quality, security, and governance across all stages of the data lifecycle.
- Document data models, pipelines, and architecture for internal use and knowledge sharing.
- Stay current with AWS data services and best practices.
- Contribute to a culture of continuous improvement and knowledge sharing within the team.
What we are looking for:
- Completed Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
- 5-7 years of experience in data engineering.
- 3 years of hands-on experience with AWS, including S3, Glue, Spark, Athena, Redshift, RDS, Lambda, and Lake Formation.
- Strong SQL skills and experience with relational databases (e.g., PostgreSQL, Oracle, RDS).
- Proficiency in Python or Scala for data processing.
- Familiarity with infrastructure-as-code tools (e.g., Terraform, CloudFormation).
- Understanding of data governance, security, and compliance in cloud environments.
Please note that if you do not hear from us within 3 weeks, you should consider your application unsuccessful.