Research Engineer (CFAR), IHPC

A*STAR RESEARCH ENTITIES

Singapore

On-site

SGD 80,000 - 100,000

Full time



Job summary

An AI research organization in Singapore is seeking a motivated Research Engineer to advance trustworthy AI, with an emphasis on privacy-preserving methods for adapting large language models (LLMs). The successful candidate will conduct cutting-edge research, tune LLMs, and contribute to open-source projects. A degree in Computer Science or a related field, strong programming skills in Python, and experience with LLMs are required. Applications can be sent to the emails listed below.

Qualifications

  • Degree in Computer Science, Machine Learning, or related field.
  • Strong background in deep learning, natural language processing, or large-scale optimization.
  • Demonstrated experience in working with open-source LLMs.

Responsibilities

  • Conduct research and development of LLM tuning paradigms.
  • Contribute to privacy-preserving LLM tuning experiments.
  • Publish high-impact research in top-tier venues.

Skills

Deep learning
Natural language processing
Strong programming skills in Python
Familiarity with ML frameworks
Experience with open-source LLMs
Good communication skills

Education

Degree in Computer Science, Machine Learning, or related field

Tools

PyTorch
HuggingFace Transformers

Job description

A*STAR Centre for Frontier AI Research (A*STAR CFAR) is seeking a motivated and skilled Research Engineer to join our team focused on advancing trustworthy AI, with a specific emphasis on privacy-preserving methods for adapting large language models (LLMs) to downstream tasks. The successful candidate will contribute to cutting-edge research on how to effectively tune or adapt LLMs without compromising either data privacy or model privacy.

Key Responsibilities

The successful candidate's responsibilities will include, but are not limited to:

  • Conduct research and development of LLM tuning paradigms.
  • Directly contribute to privacy-preserving LLM tuning experiments, including designing experimental details, writing reusable code, running evaluations, and organizing results.
  • Directly contribute to agentic AI applications such as code generation, compilers, and debugging.
  • Evaluate utility, privacy, and efficiency trade-offs among compared baselines.
  • Publish high-impact research in top-tier venues (e.g., NeurIPS, ICLR, ACL, IEEE S&P).
  • Contribute to open-source tools and frameworks, and potentially guide junior researchers or interns.

Requirements

  • Degree in Computer Science, Machine Learning, or related field.
  • Strong background in deep learning, natural language processing, or large-scale optimization.
  • Demonstrated experience in working with open-source LLMs (e.g., fine-tuning, instruction tuning, prompt engineering).
  • Familiarity with privacy-preserving machine learning concepts (e.g., federated learning, synthetic data).
  • Strong programming skills in Python and experience with ML frameworks (e.g., PyTorch, HuggingFace Transformers).
  • Good written and verbal communication skills.

Please submit your CV and a short research statement (if available) to Yin_haiyan@cfar.a-star.edu.sg and Li_Jing@cfar.a-star.edu.sg.
