Job Search and Career Advice Platform

Enable job alerts via email!

Data Scientist

FPT Asia Pacific

Singapore

On-site

SGD 80,000 - 120,000

Full time

Today
Be an early applicant

Generate a tailored resume in minutes

Land an interview and earn more. Learn more

Job summary

A leading technology firm in Singapore is seeking an experienced ML/Data Scientist to own the data and model lifecycle, ensuring evaluation, guardrail, and testing features. The ideal candidate will have over 3 years of experience delivering ML solutions, strong Python skills, and familiarity with Responsible AI principles. You will design evaluation frameworks and implement optimisation strategies to enhance system efficiency. This role offers an opportunity to bridge research and production services in a dynamic environment.

Qualifications

  • 3+ years delivering end-to-end ML/data science solutions.
  • Hands-on with LLM integration patterns.
  • Ability to write production-quality, testable code.

Responsibilities

  • Design/maintain evaluation pipelines for safety and robustness.
  • Implement prompt/model optimisation strategies.
  • Develop automated benchmarking harnesses.

Skills

Strong Python
PyTorch
TensorFlow
Responsible AI
Statistical validity
Clear communication
Job description

You own the data & model lifecycle for evaluation, guardrail, and testing features: turning exploratory research into production services, designing evaluation harnesses, curating / generating datasets, and instrumenting continuous risk & quality monitoring across Litmus and Sentinel. You are the bridge between rapid Responsible AI experimentation and reliable platform delivery.

Job Responsibilities
  • Design/maintain evaluation pipelines (batch & on-demand) for safety, robustness, fairness, leakage, and regression drift.
  • Implement prompt / model optimisation strategies (quantisation, caching, dynamic routing, selective execution) to hit latency & cost budgets.
  • Develop automated benchmarking harnesses integrating internal & external suites (jailbreak, prompt injection, harassment, PII, off-topic, leakage).
  • Define graduation criteria + sign-off checklist for moving a prototype to GA (coverage, bias metrics, drift tolerance, alert thresholds).
  • Build monitoring & alerting (precision / recall, calibration, balance, cost, latency) and drive remediation playbooks.
  • Support CI/CD with golden sets, seeded adversarial test packs, and safety regression gates blocking non-compliant releases.
Qualifications
  • 3+ years (or demonstrably equivalent) delivering end-to-end ML / data science solutions (scoping data modelling deployment monitoring).
  • Strong Python (data tooling, modern packaging, async patterns) plus PyTorch / TensorFlow or equivalent.
  • Hands‑on with LLM integration patterns (prompt engineering, evaluation, fine‑tuning / adapters, or RAG pipelines).
  • Applied understanding of Responsible AI (safety, robustness, fairness, privacy) and how to operationalise metrics (e.g. drift, guardrail precision/recall).
  • Familiarity with vector stores, embedding generation, and prompt/output tracing or observability frameworks.
  • Sound experimental design (statistical validity, variance reduction, confidence thresholds).
  • Ability to write production‑quality, testable code; effective code review participation.
  • Clear communication of technical risk & metric trade‑offs to product stakeholders.
Preferred / Bonus Qualifications
  • Experience implementing guardrail or policy frameworks (e.g. Llama Guard, NeMo Guardrails, LLM‑Guard, custom classifiers).
  • Prior work on adversarial / red‑team datasets, jailbreak detection, or toxicity / leakage mitigation.
  • Knowledge of model compression (quantisation, distillation) and GPU / accelerator optimisation.
  • Hands‑on with RLHF / DPO / preference optimisation or synthetic data generation pipelines.
Get your free, confidential resume review.
or drag and drop a PDF, DOC, DOCX, ODT, or PAGES file up to 5MB.