Senior Data Engineer

Keppel Management Ltd

Singapore

On-site

SGD 60,000 - 80,000

Full time

30+ days ago


Job summary

A leading data-focused company is looking for a Data Engineer to develop and maintain scalable data pipelines. The role involves leveraging AWS and Python to support increasing data volumes, optimize data models, and drive data accessibility. Ideal candidates have a solid background in data engineering, familiarity with AI platforms, and strong programming skills.

Qualifications

  • 5-6 years of experience in data engineering or a similar role.
  • Strong programming skills in Python and SQL, plus hands-on AWS experience.
  • Good understanding of basic machine learning concepts (e.g., SageMaker).

Responsibilities

  • Develop and maintain scalable data pipelines using Python and AWS services.
  • Collaborate with analytics teams to improve data models for business intelligence.
  • Participate in code reviews and contribute to DevOps/DataOps/MLOps.

Skills

Python
AWS
SQL

Education

Bachelor's degree in Computer Science, Engineering, or related field

Tools

Glue
Airflow
Kafka
Spark
Snowflake
DBT

Job description

  • Develop, maintain, and optimize scalable data pipelines using Python and AWS services (e.g., S3, Lambda, ECS, EKS, RDS, SNS/SQS, Vector DB), and build out new integrations to support continuing increases in data volume and complexity

  • Build solutions with AI services such as Bedrock and Google's AI platforms, ensuring seamless integration with the data pipelines

  • Develop data models, schemas, and standards that ensure data integrity, quality, and accessibility

  • Collaborate with analytics and business teams to create and refine data models for business intelligence tools, enhancing data accessibility and driving data-driven decision making

  • Take end-to-end ownership of data quality in our core datasets and data pipelines

  • Participate in code reviews and contribute to DevOps / DataOps / MLOps practices
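
To give candidates a concrete sense of the pipeline work described above, here is a minimal, self-contained sketch of an extract-transform-load step in Python. This is illustrative only, not the company's actual stack: the record fields (`user_id`, `amount`) and the validation rule are hypothetical, and in production the load step would write to a target such as S3 or Snowflake rather than returning a string.

```python
import json
from datetime import datetime, timezone

def extract(raw_lines):
    """Parse newline-delimited JSON events, skipping blank lines (hypothetical input format)."""
    return [json.loads(line) for line in raw_lines if line.strip()]

def transform(events):
    """Keep only well-formed events and stamp them with a processing time (illustrative rule)."""
    out = []
    for event in events:
        if "user_id" in event and "amount" in event:
            out.append({
                "user_id": event["user_id"],
                "amount": round(float(event["amount"]), 2),
                "processed_at": datetime.now(timezone.utc).isoformat(),
            })
    return out

def load(records):
    """Serialize records back to newline-delimited JSON; a real pipeline would write to S3 or Snowflake."""
    return "\n".join(json.dumps(record) for record in records)

raw = ['{"user_id": 1, "amount": "19.99"}', '{"bad": true}']
print(load(transform(extract(raw))))
```

In an AWS setting, a function like this would typically run inside a Lambda handler or an Airflow task, with the raw lines read from an S3 object.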

Job Requirements:

  • Bachelor's degree in Computer Science, Engineering, or a related field

  • 5-6 years of experience in data engineering or a similar role

  • Strong programming skills in Python and SQL, with hands-on experience in AWS and its related tech stack

  • Experience building scalable data pipelines with technologies such as Glue, Airflow, Kafka, and Spark

  • Experience with Snowflake, DBT, or Bedrock is a plus

  • Good understanding of basic machine learning concepts (e.g., SageMaker)
