Computer Vision Scientist - Healthcare

RESARO SINGAPORE PTE. LTD.

Singapore

On-site

SGD 60,000 - 80,000

Full time

Job summary

A leading AI assurance company in Singapore is seeking a Responsible AI Scientist to evaluate and test radiology-focused computer vision systems. The role involves designing evaluation pipelines for models analyzing chest X-rays and developing standardized evaluation procedures. Candidates should have a Bachelor's or Master's degree in a related field and strong skills in Python and machine learning frameworks. This position offers the chance to directly improve the safety and effectiveness of medical AI in healthcare.

Qualifications

  • 1+ years of experience building or evaluating ML models, ideally with exposure to computer vision.
  • Strong analytical mindset and attention to detail.
  • Interest in working with medical imaging datasets, e.g., DICOM, PACS exports.

Responsibilities

  • Design and execute evaluation pipelines for radiology AI models.
  • Work with senior scientists to design standardised evaluation procedures.
  • Prepare detailed reports explaining methodology and model behaviour.

Skills

Python
Data analysis
Machine learning frameworks
Statistical analysis
Analytical mindset

Education

Bachelor’s or Master’s in Computer Science, Data Science, Biomedical Engineering, or Machine Learning

Tools

PyTorch
TensorFlow
OpenCV
Pandas
NumPy
SciPy
Jupyter

Job description

Overview

We are a specialised AI assurance and testing company that validates medical AI systems across accuracy, robustness, explainability, fairness, reproducibility, and safety. Our work supports hospitals, regulators, startups, and medical device companies in ensuring clinical-grade model performance.

As healthcare AI adoption accelerates, the next decade demands strong evaluation and assurance practices to ensure clinical safety, reliability, and trust. Most organisations lack the expertise to objectively test medical AI systems—particularly those used in radiology where diagnostic accuracy and robustness directly impact patient outcomes.

We are seeking a Responsible AI Scientist with an interest in evaluating and testing radiology-focused computer vision systems. You will help design and run evaluation pipelines for models analysing chest X-rays, bone fractures, mammography, and other high-stakes imaging tasks. This role is ideal for someone with a strong analytical foundation who wants to contribute directly to improving the safety of medical AI.

Responsibilities

YOU WILL:

  • Model Evaluation & Testing
    • Design and execute evaluation pipelines for radiology AI models, including: Disease detection (classification); Disease localisation (bounding boxes, segmentation maps); Multi-dimensional assessments (age, ethnicity, scanner type, image quality, pathology prevalence).
    • Develop dataset sampling strategies aligned with global healthcare AI standards (e.g., FDA Good Machine Learning Practice, WHO Ethics & Governance, EU AI Act medical requirements).
    • Run statistical performance analysis, including: Confidence intervals; Statistical significance tests; Case-level error classification; Reader–model comparisons (e.g., ROC, FROC, kappa). A minimal subgroup AUC sketch follows this list.
    • Conduct error analysis to identify systematic failure modes (e.g., positioning artefacts, metal implants, low-quality scans).
  • Testing Framework & Process Development
    • Work with senior scientists to design and maintain standardised evaluation procedures for medical AI, including: Ground-truth creation and validation; Annotation workflows with radiologists; Bias and subgroup analysis; Robustness and stress testing (image noise, rotations, contrast changes, occlusions); Clinical safety risk assessments tailored to modality and disease type. A robustness perturbation sketch follows this list.
    • Implement reproducible pipelines using Python, CV libraries, and statistical frameworks.
  • Cross-functional Collaboration
    • Work closely with Responsible AI Scientists, radiologists, and software engineers to ensure evaluation quality.
    • Support client engagements by preparing detailed reports explaining methodology, model behaviour, and risks.
    • Contribute to the maintenance and expansion of our proprietary medical imaging datasets for testing.
  • Documentation & Research
    • Document testing methodologies, evaluation protocols, and results in a clear, audit-ready format.
    • Track the latest research in: Radiology AI; Model robustness testing; Explainability for medical imaging; Global medical AI regulation and standards.
    • Contribute to our internal knowledge base and product roadmap for healthcare AI testing.
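
As one concrete illustration of the statistical analysis and multi-dimensional assessments described above, here is a minimal sketch that computes AUC with percentile-bootstrap confidence intervals for a whole cohort and for each subgroup of a binary classifier's results. The column names (y_true, y_score, scanner_type), the bootstrap settings, and the synthetic data are assumptions for illustration only, not the company's actual evaluation pipeline.

```python
# Minimal sketch: per-subgroup AUC with bootstrap confidence intervals.
# Assumes a results table with columns y_true (0/1), y_score (model probability),
# and a subgroup column such as scanner_type. Column names are illustrative only.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

def bootstrap_auc_ci(y_true, y_score, n_boot=2000, alpha=0.05, seed=0):
    """AUC point estimate with a percentile bootstrap confidence interval."""
    rng = np.random.default_rng(seed)
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    point = roc_auc_score(y_true, y_score)
    stats = []
    n = len(y_true)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)          # resample cases with replacement
        if len(np.unique(y_true[idx])) < 2:  # skip degenerate resamples
            continue
        stats.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return point, lo, hi

def subgroup_report(df, group_col="scanner_type"):
    """AUC and 95% CI for the whole cohort and for each subgroup."""
    rows = [("overall", *bootstrap_auc_ci(df["y_true"], df["y_score"]))]
    for name, g in df.groupby(group_col):
        rows.append((str(name), *bootstrap_auc_ci(g["y_true"], g["y_score"])))
    return pd.DataFrame(rows, columns=["subgroup", "auc", "ci_low", "ci_high"])

if __name__ == "__main__":
    # Synthetic data standing in for a real evaluation export.
    rng = np.random.default_rng(42)
    n = 500
    df = pd.DataFrame({
        "y_true": rng.integers(0, 2, n),
        "y_score": rng.random(n),
        "scanner_type": rng.choice(["vendor_a", "vendor_b"], n),
    })
    print(subgroup_report(df))
```

Percentile bootstrapping is used here because it makes no distributional assumptions about the AUC estimate; a real engagement might also report DeLong intervals or paired significance tests between readers and models.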
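
For the robustness and stress testing mentioned under framework development, the sketch below applies the perturbation types the posting lists (Gaussian noise, rotation, contrast change, occlusion) to an 8-bit image using OpenCV and NumPy. The perturbation parameters and the predict_fn placeholder are illustrative assumptions, not a validated stress-testing protocol.

```python
# Minimal sketch: image perturbations for robustness testing of a CV model.
# Parameters (noise sigma, rotation angle, contrast gain, occlusion size) are
# illustrative defaults, not a validated stress-testing protocol.
import cv2
import numpy as np

def add_gaussian_noise(img, sigma=10.0):
    noisy = img.astype(np.float32) + np.random.normal(0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def rotate(img, angle_deg=5.0):
    h, w = img.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)
    return cv2.warpAffine(img, m, (w, h), borderMode=cv2.BORDER_REPLICATE)

def change_contrast(img, gain=1.2, bias=0.0):
    return cv2.convertScaleAbs(img, alpha=gain, beta=bias)

def occlude(img, size=64):
    out = img.copy()
    h, w = out.shape[:2]
    y = np.random.randint(0, max(1, h - size))
    x = np.random.randint(0, max(1, w - size))
    out[y:y + size, x:x + size] = 0   # black patch simulating an occlusion
    return out

PERTURBATIONS = {
    "gaussian_noise": add_gaussian_noise,
    "rotation": rotate,
    "contrast": change_contrast,
    "occlusion": occlude,
}

def stress_test(img, predict_fn):
    """Score the clean image and each perturbed variant.

    predict_fn is any callable mapping an image array to a score; it stands in
    for the model under evaluation.
    """
    scores = {"clean": predict_fn(img)}
    for name, fn in PERTURBATIONS.items():
        scores[name] = predict_fn(fn(img))
    return scores
```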

YOU ARE ABLE TO:

  • Think from first principles and analyse problems across technical, statistical, clinical, and ethical dimensions.
  • Communicate complex model behaviour clearly and respectfully to both technical and non-technical stakeholders.
  • Work comfortably in fast-paced, ambiguous environments where requirements evolve rapidly.
  • Explore new ideas independently while staying focused on mission-critical objectives.

Required Qualifications

  • Bachelor’s or Master’s in Computer Science, Data Science, Biomedical Engineering, Machine Learning, or related field.
  • 1+ years of experience building or evaluating ML models, ideally with exposure to computer vision.
  • Strong Python skills and familiarity with machine learning/CV frameworks (PyTorch, TensorFlow, scikit-learn, OpenCV).
  • Experience with data analysis using Pandas, NumPy, SciPy, and Jupyter.
  • Understanding of basic statistics: confidence intervals, significance tests, metrics such as AUC, sensitivity/specificity.
  • Interest in working with medical imaging datasets (e.g., DICOM, PACS exports); a DICOM loading sketch follows this list.
  • Strong analytical mindset and attention to detail.
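
For readers new to DICOM, the sketch below shows one common way to load an image from a DICOM file with pydicom and normalise it to an 8-bit array. The rescale and windowing handling is deliberately simplified; real PACS exports often need modality-specific windowing and more careful photometric-interpretation checks.

```python
# Minimal sketch: load a DICOM image with pydicom and normalise it to uint8.
# The rescale/windowing logic is deliberately simplified for illustration.
import numpy as np
import pydicom

def load_dicom_as_uint8(path):
    ds = pydicom.dcmread(path)
    pixels = ds.pixel_array.astype(np.float32)

    # Apply rescale slope/intercept if present (common in CT; defaults are harmless).
    slope = float(getattr(ds, "RescaleSlope", 1.0))
    intercept = float(getattr(ds, "RescaleIntercept", 0.0))
    pixels = pixels * slope + intercept

    # MONOCHROME1 stores inverted intensities; flip so higher means brighter.
    if getattr(ds, "PhotometricInterpretation", "") == "MONOCHROME1":
        pixels = pixels.max() - pixels

    # Min-max scale to 0-255 for a simple model input; real pipelines would
    # usually apply modality- or window-specific normalisation instead.
    pixels = pixels - pixels.min()
    if pixels.max() > 0:
        pixels = pixels / pixels.max()
    return (pixels * 255).astype(np.uint8)

# Usage (the path is a placeholder):
# img = load_dicom_as_uint8("example_cxr.dcm")
# print(img.shape, img.dtype)
```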

Preferred Qualifications

  • Experience with radiology data (chest X-ray, CT, mammography, MSK imaging).
  • Knowledge of medical imaging standards: DICOM, IHE profiles, FDA GMLP.
  • Exposure to medical AI evaluation tools (MONAI, MedPy, Pydicom).
  • Understanding of explainability techniques for CV models (Grad-CAM, Integrated Gradients); a Grad-CAM sketch follows this list.
  • Experience running robustness, bias, or fairness tests on CV models.
  • Familiarity with cloud environments, MLOps workflows, and reproducible ML pipelines.
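
As one example of the explainability techniques listed above, the sketch below is a minimal Grad-CAM implementation using PyTorch forward and backward hooks on a torchvision ResNet-18. The backbone and the choice of layer4 as the target layer are assumptions for illustration; libraries such as Captum or MONAI provide more complete, tested implementations.

```python
# Minimal Grad-CAM sketch using PyTorch hooks on a torchvision ResNet-18.
# The backbone and target layer (layer4) are illustrative choices only.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

class GradCAM:
    def __init__(self, model, target_layer):
        self.model = model.eval()
        self.activations = None
        self.gradients = None
        target_layer.register_forward_hook(self._save_activation)
        target_layer.register_full_backward_hook(self._save_gradient)

    def _save_activation(self, module, inputs, output):
        self.activations = output.detach()

    def _save_gradient(self, module, grad_input, grad_output):
        self.gradients = grad_output[0].detach()

    def __call__(self, x, class_idx=None):
        logits = self.model(x)                       # forward pass; hook captures activations
        if class_idx is None:
            class_idx = logits.argmax(dim=1).item()
        self.model.zero_grad()
        logits[0, class_idx].backward()              # backward pass; hook captures gradients

        weights = self.gradients.mean(dim=(2, 3), keepdim=True)   # global-average-pool the gradients
        cam = F.relu((weights * self.activations).sum(dim=1, keepdim=True))
        cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
        cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalise to [0, 1]
        return cam[0, 0]                                          # heatmap with the input's H x W

if __name__ == "__main__":
    model = resnet18(weights=None)                   # untrained backbone, purely for a shape check
    cam = GradCAM(model, model.layer4)
    dummy = torch.randn(1, 3, 224, 224)
    heatmap = cam(dummy)
    print(heatmap.shape)                             # torch.Size([224, 224])
```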

Who We Are

We are a mission-driven venture focused on making AI safe for the world, starting with one of its most critical domains—healthcare. We believe AI can transform clinical workflows, but only when deployed with rigorous testing and governance. Our team works at the intersection of engineering, clinical science, data governance, and AI assurance to ensure medical AI systems are safe, accurate, and trustworthy.

We are first-principles thinker-doers who value thoughtful technology development and strong ethical grounding. We partner with healthcare providers, regulators, and innovators to build AI systems that genuinely improve patient outcomes.
