Senior Data Engineer

NICOLL CURTIN TECHNOLOGY PTE. LTD.

Singapore

On-site

SGD 80,000 - 120,000

Full time

4 days ago

Job summary

A European investment bank in Singapore is seeking a Data Engineer to design and maintain data pipelines and backend services. The role involves solving complex data problems with modern cloud technologies in a regulated environment. Applicants should have at least 5 years of experience building data systems, with strong skills in Python, SQL, and large-scale data processing technologies. The position is a 12-month contract with the possibility of conversion to permanent.

Qualifications

  • 5+ years of experience building end-to-end data systems and ETL/ELT pipelines.
  • Strong SQL skills for diagnosing data issues and validating transformations.
  • Hands-on experience with large-scale data processing in cloud environments.

Responsibilities

  • Design, develop, and maintain real-time data pipelines and backend services.
  • Define data requirements and transformation logic across data sets.
  • Write secure, high-quality code in Python or similar languages.
  • Support scalable data architectures using Azure and distributed computing.

Skills

Real-time data engineering
Data governance
Data quality
Python
SQL
Large-scale data processing
ETL/ELT
Azure

Education

BS/MS in Computer Science or equivalent

Tools

Airflow
Databricks
PySpark
Kubernetes
Docker
Kafka

Job description

Our client, a private European investment bank, is looking for a Data Engineer to design, build, and maintain the data pipelines, backend services, and governance frameworks that power real-time decisioning and analytics across the organisation. This role suits someone who enjoys solving complex data problems, building robust backend systems, and working with modern cloud and streaming technologies in a regulated environment.

Responsibilities
  • Design, develop, and maintain real-time data pipelines, backend services, and workflow orchestration for decisioning, reporting, and data processing.
  • Define data requirements, models, and transformation logic across structured and unstructured data sets.
  • Write high-quality, secure, well-tested code in Python, Scala, or similar languages.
  • Build software and processes that strengthen data governance, data quality, and data security.
  • Support scalable data architectures using Azure, Delta/Live Tables, and distributed computing frameworks (a sketch of such a pipeline follows this list).
  • Troubleshoot and resolve data issues across pipelines, storage layers, and downstream systems.
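
To make the streaming responsibilities above concrete, here is a minimal PySpark Structured Streaming sketch of the kind of pipeline described: events read from a Kafka topic, lightly transformed, and appended to a Delta table. The broker address, topic name, column names, and storage paths are hypothetical placeholders, not details from this posting.

  # Minimal sketch: Kafka in, Delta table out. All names are hypothetical.
  from pyspark.sql import SparkSession
  from pyspark.sql import functions as F

  spark = SparkSession.builder.appName("decisioning-pipeline").getOrCreate()

  # Read raw events from a Kafka topic (hypothetical broker and topic).
  events = (
      spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "transactions")
      .load()
  )

  # Apply transformation logic: cast the binary key/value payload to strings.
  parsed = events.select(
      F.col("key").cast("string").alias("account_id"),
      F.col("value").cast("string").alias("payload"),
      F.col("timestamp").alias("event_time"),
  )

  # Append to a Delta table, with a checkpoint for fault-tolerant recovery.
  query = (
      parsed.writeStream
      .format("delta")
      .option("checkpointLocation", "/mnt/datalake/checkpoints/transactions")
      .outputMode("append")
      .start("/mnt/datalake/delta/transactions")
  )
  query.awaitTermination()
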
Requirements
  • BS/MS in Computer Science, Data Engineering, Information Systems, or equivalent experience.
  • 5+ years building end-to-end data systems, ETL/ELT pipelines, and workflow management using tools such as Airflow, Databricks, or similar (an orchestration sketch follows this list).
  • Strong SQL skills for diagnosing data issues and validating complex transformations (see the validation sketch after this list).
  • Hands-on experience with large-scale data processing using Databricks, PySpark, Spark Streaming, and Delta Tables.
  • Experience in Azure cloud data environments, including Data Lake Storage and CI/CD deployment workflows.
  • Familiarity with microservices platforms (Kubernetes, Docker) and event-driven systems (Kafka, Event Hub, Event Grid, Flink).
  • Experience developing in Python, Scala, JavaScript, Java, or C#.
  • Knowledge of dbt, Data Vault, or Microsoft Fabric is a plus.
  • Prior experience in banking or financial services is highly preferred.
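
As an illustration of the SQL requirement above, a reconciliation query of the following shape compares row counts and totals between a raw source table and its transformed target. The schema, table, and column names are hypothetical, chosen only for the example.

  # Illustrative data-quality check via Spark SQL; all names are hypothetical.
  from pyspark.sql import SparkSession

  spark = SparkSession.builder.appName("dq-validation").getOrCreate()

  # Flag trade dates where the curated table disagrees with the raw source.
  mismatches = spark.sql("""
      SELECT s.trade_date,
             COUNT(*)        AS source_rows,
             SUM(s.notional) AS source_notional,
             t.target_rows,
             t.target_notional
      FROM   raw.trades s
      JOIN   (SELECT trade_date,
                     COUNT(*)      AS target_rows,
                     SUM(notional) AS target_notional
              FROM   curated.trades
              GROUP  BY trade_date) t
             ON s.trade_date = t.trade_date
      GROUP  BY s.trade_date, t.target_rows, t.target_notional
      HAVING COUNT(*) <> t.target_rows
          OR SUM(s.notional) <> t.target_notional
  """)
  mismatches.show()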
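
The requirements also mention workflow management with Airflow. Assuming Airflow 2.x, a minimal DAG scheduling a daily quality-check task might look like the sketch below; the DAG id, schedule, and task body are hypothetical.

  # Hypothetical Airflow 2.x DAG orchestrating a daily batch leg of the pipeline.
  from datetime import datetime
  from airflow import DAG
  from airflow.operators.python import PythonOperator

  def run_quality_checks():
      # Placeholder: in practice, run reconciliation queries like the one above.
      print("running data-quality checks")

  with DAG(
      dag_id="daily_trades_pipeline",
      start_date=datetime(2024, 1, 1),
      schedule="@daily",
      catchup=False,
  ) as dag:
      quality = PythonOperator(
          task_id="quality_checks",
          python_callable=run_quality_checks,
      )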

(12-Month Contract — Convertible to Permanent)
