Job Search and Career Advice Platform

Senior Data Engineer

Newbridge

Singapore

On-site

SGD 70,000 - 90,000

Full time

Today

Job summary

A leading data engineering company in Singapore is seeking an experienced Data Engineer to design, build, and maintain data pipelines for analytics and machine learning. This role involves collaborating with cross-functional teams to create scalable data assets and optimize large-scale data processing. The ideal candidate has over 5 years of experience in data engineering, with deep expertise in Apache Spark and cloud platforms. Competitive salary and benefits offered.

Skills

Apache Spark (batch & streaming)
Kafka
Machine Learning
SQL
Python
Scala

Tools

AWS
GCP
Azure
Redshift
BigQuery
Snowflake

Job description

Design, build, and maintain high-performance data pipelines that power analytics and machine learning products. Collaborate with data scientists, product, and infrastructure teams to turn raw data into scalable, reliable assets.

Key Responsibilities

  • Architect end-to-end batch and Spark Streaming pipelines in the cloud (AWS/GCP/Azure).
  • Implement ML feature pipelines and real-time inference services.
  • Optimize petabyte-scale processing with Spark, Kafka, and Flink.
  • Build and maintain data warehouses/lakes (Redshift, BigQuery, Snowflake).
  • Enforce data quality, governance, and security.
  • Develop CI/CD, monitoring, and alerting for pipelines.
  • Mentor engineers and drive best-practice documentation.
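For illustration only (this sketch is not part of the posting): the event-time, tumbling-window aggregation that a Spark Structured Streaming or Flink job performs over Kafka topics can be shown in miniature with plain Python. All names and values below are hypothetical.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_secs=60):
    """Group (timestamp, key) events into fixed-size windows and count per key.

    A toy, in-memory stand-in for the windowed aggregations that streaming
    engines run continuously at scale.
    """
    counts = defaultdict(int)
    for ts, key in events:
        # Align each event to the start of its window.
        window_start = (ts // window_secs) * window_secs
        counts[(window_start, key)] += 1
    return dict(counts)

# Example: two clicks in the first minute, one in the second.
events = [(5, "click"), (42, "click"), (75, "click")]
print(tumbling_window_counts(events))
# {(0, 'click'): 2, (60, 'click'): 1}
```

In a real pipeline the same grouping would be expressed declaratively (e.g. a window function over a Kafka source) and handle late data via watermarks; the sketch only shows the core bucketing logic.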

Required Experience

  • 5+ years of production-grade data engineering experience.
  • Deep expertise with Apache Spark (batch & streaming), Kafka, and distributed processing.
  • Hands-on ML pipeline experience (feature engineering, model training, deployment).
  • Experience with cloud data platforms and warehousing.
  • Strong SQL plus Python/Scala; familiarity with Airflow, dbt, or similar.