Data Engineer

Keppel Management Ltd

Singapore

On-site

SGD 60,000 - 90,000

Full time

3 days ago

Job summary

A leading company in data solutions is searching for a Data Engineer to enhance and maintain scalable data pipelines. This role involves collaborating with analytics teams and ensuring data quality while leveraging technologies like AWS and Python. The ideal candidate will hold a Bachelor's degree, bring relevant experience in data engineering, and help drive data-driven decision-making across the organization.

Qualifications

  • Bachelor's degree in Computer Science, Engineering, or a related field.
  • 2-3 years of experience in data engineering or a similar role.
  • Strong skills in Python, SQL, AWS, and related technologies.

Responsibilities

  • Develop and maintain scalable data pipelines using Python and AWS.
  • Collaborate with analytics teams to improve data models.
  • Participate in code reviews and contribute to DataOps.

Skills

Python
SQL
AWS
Data Engineering
Machine Learning concepts

Education

Bachelor's degree in Computer Science, Engineering, or a related field

Tools

Glue
Airflow
Kafka
Spark
Snowflake
DBT
Bedrock

Job description

  • Develop, maintain, and optimize scalable, high-performance data pipelines using Python and AWS services (e.g., S3, Lambda, ECS, EKS, RDS, SNS/SQS, Vector DB)

  • Build out new integrations to support continuing growth in data volume and complexity, ensuring seamless integration with AI platforms such as Bedrock and Google

  • Collaborate with analytics and business teams to create and refine data models for business intelligence tools, enhancing data accessibility and driving data-driven decision making

  • Take end-to-end ownership of data quality in our core datasets and data pipelines

  • Participate in code reviews and contribute to DevOps / DataOps / MLOps practices

Job Requirements

  • Bachelor's degree in Computer Science, Engineering, or a related field

  • 2-3 years of experience in data engineering or a similar role

  • Strong programming skills in Python, SQL, AWS, and the related tech stack

  • Experience building scalable data pipelines with technologies such as Glue, Airflow, Kafka, and Spark

  • Experience with Snowflake, DBT, or Bedrock is a plus

  • Good understanding of basic machine learning concepts (e.g., SageMaker)
