
Data Engineer

DYMON ASIA CAPITAL (SINGAPORE) PTE. LTD.

Singapore

On-site

SGD 80,000 - 120,000

Full time

Today

Job summary

A leading financial services firm in Singapore is seeking an experienced Data Engineer to architect and optimize data pipelines. The role involves collaboration with PMs and systematic teams to ensure data quality and delivery. Candidates should have 4–10 years of experience, coding skills in Python and SQL, and familiarity with cloud platforms like Azure. Join us to help build a resilient data infrastructure that powers investment strategies and AI-driven insights.

Qualifications

  • 4–10 years of hands-on experience in data engineering.
  • Experience with lakehouse architectures and open table formats.
  • Strong communication skills in fast-paced, collaborative settings.

Responsibilities

  • Design, build, and optimize scalable data pipelines.
  • Collaborate with PMs to deliver timely, trustworthy data.
  • Drive ETL/ELT automation using cloud-native systems.

Skills

Python
SQL
Distributed systems
High-performance data infrastructure
Data governance

Education

Bachelor's or Master's in Computer Science or related field

Tools

Spark
Kafka
Hadoop
Azure
Databricks
Snowflake
Redshift

Job description

We’re rebuilding our technology and data platform from the ground up—and we’re looking for superstar Data Engineers to help lead the change. If you’re a curious, creative problem solver with 4–10 years of hands-on experience, this is your chance to shape the pipelines that power everything from multi-asset investment strategies to AI-driven insights.

You’ll architect the data backbone that fuels our research, risk systems, and execution workflows. If you thrive in high-performance environments and love turning raw data into reliable, scalable infrastructure, we want to hear from you.

Responsibilities

  • End-to-End Pipelines: Design, build, and optimize scalable data pipelines for structured and unstructured financial datasets.
  • Core Data Delivery: Collaborate with discretionary PMs to deliver timely, trustworthy data for decision-making.
  • Quant-Ready Infrastructure: Partner with systematic teams to shape data for advanced analytics and modelling.
  • API Innovation: Help evolve our data delivery platform with user-friendly APIs accessible via Excel, Python, and more.
  • Quality at Scale: Implement monitoring, validation, and remediation frameworks to ensure data accuracy and consistency.
  • Governance Advocacy: Champion standards, lineage, metadata, and security across the data ecosystem.
  • Automation & Efficiency: Drive ETL/ELT automation using cloud-native and distributed systems.
  • Vendor Integration: Work with internal and external data providers to onboard and manage critical data assets.

Qualifications

  • A Bachelor’s or Master’s in Computer Science, Engineering, or a related field.
  • Strong coding skills in Python and SQL, plus experience with distributed systems like Spark, Kafka, or Hadoop.
  • Deep knowledge of lakehouse architectures and open table formats (Iceberg, Delta Lake, Parquet).
  • Hands-on experience with cloud platforms (Azure preferred) and modern data warehouses (Databricks, Snowflake, Redshift).
  • A proven track record of building resilient data infrastructure in high-performance or financial environments, following CI/CD practices.
  • Familiarity with financial data nuances—traditional vs alternative, structured vs unstructured, batch vs real-time—and a sharp eye for point-in-time modelling.
  • A detail-oriented mindset, ownership mentality, and strong communication skills in fast-paced, collaborative settings.