Data Engineer

NICOLL CURTIN TECHNOLOGY PTE. LTD.

Singapore

On-site

SGD 80,000 - 110,000

Full time

Job summary

A private European investment bank in Singapore is seeking a Data Engineer to design and maintain data pipelines and backend services. The ideal candidate has more than five years of experience building end-to-end data systems with modern cloud and streaming technologies, especially on Azure. Responsibilities include defining data requirements and ensuring data governance. This is a 12-month contract, convertible to permanent.

Qualifications

  • 5+ years of experience building end-to-end data systems and ETL/ELT pipelines.
  • Hands-on experience with large-scale data processing using Databricks and Spark.
  • Experience in Azure cloud environments and CI/CD workflows.

Responsibilities

  • Design and maintain real-time data pipelines and backend services.
  • Define data requirements and transformation logic for datasets.
  • Troubleshoot data issues across pipelines and storage layers.

Skills

Data pipeline development
Backend services architecture
SQL proficiency
Data governance and security
Data processing with PySpark

Education

BS/MS in Computer Science or related field

Tools

Azure
Databricks
Airflow
Kubernetes
Docker

Job description

Our client, a private European investment bank, is looking for a Data Engineer to design, build, and maintain the data pipelines, backend services, and governance frameworks that power real-time decisioning and analytics across the organisation. This role suits someone who enjoys solving complex data problems, building robust backend systems, and working with modern cloud and streaming technologies in a regulated environment.

Responsibilities
  • Design, develop, and maintain real-time data pipelines, backend services, and workflow orchestration for decisioning, reporting, and data processing (a pipeline sketch follows this list).
  • Define data requirements, models, and transformation logic across structured and unstructured data sets.
  • Write high-quality, secure, well-tested code in Python, Scala, or similar languages.
  • Build software and processes that strengthen data governance, data quality, and data security.
  • Support scalable data architectures using Azure, Delta Lake/Delta Live Tables, and distributed computing frameworks.
  • Troubleshoot and resolve data issues across pipelines, storage layers, and downstream systems.
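
As a rough illustration of the real-time pipeline work in the first bullet above, the following minimal PySpark Structured Streaming sketch reads events from a Kafka topic, applies a basic quality gate, and appends the result to a Delta table. Nothing in it comes from the employer: the broker, topic, event schema, and storage paths are all hypothetical, and the job assumes a runtime (such as Databricks) with the Kafka and Delta connectors available.

```python
# Hypothetical sketch only: broker, topic, schema, and paths are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import (DoubleType, StringType, StructField,
                               StructType, TimestampType)

spark = SparkSession.builder.appName("decisioning-pipeline-sketch").getOrCreate()

# Hypothetical event schema for illustration.
schema = StructType([
    StructField("account_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Read a stream of JSON events from a (placeholder) Kafka topic.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "transactions")               # placeholder topic
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Simple data-quality gate before anything reaches downstream consumers.
clean = events.filter(col("account_id").isNotNull() & (col("amount") > 0))

# Append validated records to a Delta table, with checkpointing for recovery.
(
    clean.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/transactions")  # placeholder
    .outputMode("append")
    .start("/mnt/delta/transactions")                               # placeholder
)
```

In production, a job like this would typically run under an orchestrator such as Airflow or Databricks Workflows, which the requirements below call out.
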
Requirements
  • BS/MS in Computer Science, Data Engineering, Information Systems, or equivalent experience.
  • 5+ years building end-to-end data systems and ETL/ELT pipelines, and managing workflows with tools such as Airflow, Databricks, or similar (a minimal Airflow sketch follows this list).
  • Strong SQL skills for diagnosing data issues and validating complex transformations.
  • Hands-on experience with large-scale data processing using Databricks, PySpark, Spark Streaming, and Delta Tables.
  • Experience in Azure cloud data environments, including Data Lake Storage and CI/CD deployment workflows.
  • Familiarity with microservices platforms (Kubernetes, Docker) and event-driven systems (Kafka, Event Hub, Event Grid, Flink).
  • Experience developing in Python, Scala, JavaScript, Java, or C#.
  • Knowledge of dbt, Data Vault, or Microsoft Fabric is a plus.
  • Prior experience in banking or financial services is highly preferred.
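
The workflow-management requirement above can be made concrete with a minimal Airflow DAG. This is a hypothetical sketch, not the bank's setup: the DAG id, schedule, and task body are placeholders, and it assumes Airflow 2.4 or later (which accepts the `schedule` argument).

```python
# Hypothetical sketch only: DAG id, schedule, and task body are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_and_load():
    """Placeholder for the real extract/transform/load logic."""
    print("running ETL step")


with DAG(
    dag_id="daily_etl_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(
        task_id="extract_and_load",
        python_callable=extract_and_load,
    )
```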

(12-Month Contract, Convertible to Permanent)
