AWS Cloud Data Engineer

EPS Consultants

Singapore

On-site

SGD 70,000 - 110,000

Full time

27 days ago

Job summary

A leading company is seeking an AWS ETL Cloud Data Engineer to design and maintain scalable data pipelines in AWS. The role requires expertise in AWS services and data management best practices, with a focus on building data lakes and optimizing ETL processes. Candidates should hold a Bachelor's degree in Computer Science or Information Technology and have extensive experience with AWS tools.

Qualifications

  • 5+ years of experience with ETL, Data Modeling, Data Architecture.
  • Proficient in ETL optimization and big data processes using PySpark.
  • 3+ years of experience on the AWS platform using core services.

Responsibilities

  • Design and operationalize large-scale enterprise data solutions using AWS.
  • Build production ETL data pipelines from ingestion to consumption.
  • Analyze and re-architect on-premises data warehouses for migration to the AWS cloud.

Skills

ETL
Data Modeling
Data Architecture
AWS Athena
Glue PySpark
Redshift
RDS-PostgreSQL
S3
Airflow
PySpark

Education

Bachelor's Degree in Computer Science
Bachelor's Degree in Information Technology

Job description

Job Title: AWS ETL Cloud Data Engineer

Job Overview:

The AWS Cloud Data Engineer will be responsible for designing, building, and maintaining scalable data pipelines and data infrastructure in the AWS cloud environment. This role requires expertise in AWS services, data modeling, ETL processes, and a keen understanding of best practices for data management and governance.

Key Responsibilities:
  1. Design, build, and operationalize large-scale enterprise data solutions and applications using AWS data and analytics services in combination with third-party tools, including Spark/Python on Glue, Redshift, S3, Athena, RDS-PostgreSQL, Airflow, Lambda, DMS, CodeCommit, CodePipeline, CodeBuild, etc.
  2. Design and build production ETL data pipelines from ingestion to consumption within a big data architecture, using DMS, DataSync, and Glue (a minimal sketch of such a Glue job follows this list).
  3. Understand existing applications (including the on-premises Cloudera Data Lake) and infrastructure architecture.
  4. Analyze, re-architect, and re-platform on-premises data warehouses onto data platforms in the AWS cloud using AWS or third-party services.
  5. Design and implement data engineering, ingestion, and curation functions on AWS cloud using native AWS services or custom programming.
  6. Perform detailed assessments of current data platforms and create transition plans to the AWS cloud.
  7. Collaborate with development, infrastructure, and data center teams to define Continuous Integration and Continuous Delivery processes following industry standards.
  8. Work on hybrid Data Lake environments.
  9. Coordinate with multiple stakeholders to ensure high standards are maintained.
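Purely for illustration of the pipeline work described in item 2 above, here is a minimal sketch of a Glue PySpark job: it reads a raw table from the Glue Data Catalog, drops null fields, and writes partitioned Parquet to S3 for downstream Athena queries. The database, table, bucket, and partition names (raw_db, transactions, example-curated-bucket, ingest_date) are placeholder assumptions, not details from this posting.

import sys
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.transforms import DropNullFields
from awsglue.utils import getResolvedOptions

# Standard Glue job boilerplate: resolve the job name and initialise contexts.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the raw table from the Glue Data Catalog (placeholder database/table).
raw = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="transactions"
)

# Example transformation: drop fields that contain only nulls.
cleaned = DropNullFields.apply(frame=raw)

# Write curated output as Parquet, partitioned by an assumed ingest_date
# column, so Athena can query it efficiently from S3.
glue_context.write_dynamic_frame.from_options(
    frame=cleaned,
    connection_type="s3",
    connection_options={
        "path": "s3://example-curated-bucket/transactions/",
        "partitionKeys": ["ingest_date"],
    },
    format="parquet",
)

job.commit()

In practice a job like this would be parameterised and scheduled by Airflow, as sketched after the Mandatory Skill-set list below.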
Mandatory Skill-set:
  • Bachelor's Degree in Computer Science, Information Technology, or related fields.
  • 5+ years of experience with ETL, Data Modeling, and Data Architecture to build Data Lakes; proficient in ETL optimization and in designing, coding, and tuning big data processes using PySpark.
  • 3+ years of extensive experience working on the AWS platform using core services like AWS Athena, Glue PySpark, Redshift, RDS-PostgreSQL, S3, and Airflow for orchestration (a minimal orchestration sketch follows this list).
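To illustrate the Airflow orchestration mentioned above, here is a minimal sketch of a DAG that triggers the Glue job daily. The DAG id, job name, and schedule are illustrative assumptions; the schedule keyword assumes Airflow 2.4+ (older versions use schedule_interval), and GlueJobOperator comes from the apache-airflow-providers-amazon package.

from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

with DAG(
    dag_id="curate_transactions",      # placeholder DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                 # Airflow 2.4+ keyword
    catchup=False,
) as dag:
    # Trigger the (assumed pre-created) Glue job and block until it finishes.
    run_curation = GlueJobOperator(
        task_id="run_curation_job",
        job_name="curate-transactions",  # placeholder Glue job name
        wait_for_completion=True,
    )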
Good to Have Skills:
  • Fundamentals of the insurance domain.
  • Functional knowledge of IFRS 17.
Benefits:
  • 14 days Annual Leave (AL).
  • Company insurance benefits.