Principal Responsible AI Research Scientist: London, UK

Autodesk

City of Westminster

On-site

GBP 80,000 - 120,000

Full time

Today

Job summary

A leading software firm in the UK is seeking a Principal Responsible AI Research Scientist to conduct groundbreaking research on safe and reliable AI systems. You will collaborate with a diverse team on innovative projects across various domains, aiming to advance responsible AI practices. The ideal candidate holds a PhD in a relevant field and has a strong publication record, along with extensive knowledge of generative AI techniques. The role is central to Autodesk's commitment to responsible AI development.

Qualifications

  • PhD with a focus on responsible AI.
  • Track record in responsible AI publications.
  • Hands-on experience with generative AI.

Responsibilities

  • Conduct original research on responsible AI.
  • Develop ML models emphasizing safety and reliability.
  • Collaborate with teams to integrate responsible AI.

Skills

Strong publication track record in responsible AI
Excellent coding skills in Python
Experience in applying responsible AI principles
Strong foundation in machine learning
Collaborative work in cross-functional teams

Education

PhD in Computer Science, AI/ML, or related field

Tools

Python
PyTorch
TensorFlow
JAX

Job description

Principal Responsible AI Research Scientist: London, UK

As a Responsible AI Research Scientist at Autodesk Research, you will be at the forefront of developing safe, reliable, and robust AI systems that help our customers imagine, design, and make a better world. You will join a diverse team of scientists, researchers, engineers, and designers working on cutting‑edge projects in AI safety, trustworthy AI, and responsible AI practices across domains including design systems, computer vision, graphics, robotics, human‑computer interaction, sustainability, simulation, manufacturing, architectural design, and construction.

Responsibilities

  • Conduct original research on responsible, safe, reliable, robust, and trustworthy AI.
  • Develop new ML models and AI techniques with a strong emphasis on safety, reliability, and responsible implementation.
  • Collaborate with cross‑functional teams and stakeholders to integrate responsible AI practices into Autodesk products and services.
  • Review and synthesize relevant literature on AI safety, reliability, and responsibility to identify emerging methods, technologies, and best practices.
  • Work toward long‑term research goals in AI safety and responsibility while identifying intermediate milestones.
  • Build collaborations with academics, institutions, and industry partners in the field of responsible and safe AI.
  • Explore new data sources and discover techniques for leveraging data in a responsible and reliable manner.
  • Publish papers and present at conferences on topics related to responsible, safe, and reliable AI.
  • Contribute to the strategic thinking on research directions that align with Autodesk's commitment to safe and responsible AI development.

Qualifications

  • PhD in Computer Science, AI/ML, or a related field with a focus on responsible, safe, and reliable AI.
  • Strong publication track record in responsible AI, safe AI, reliable AI, robust AI, or similar domains in top‑tier conferences and/or journals.
  • Extensive knowledge and hands‑on experience with generative AI, including Large Language Models (LLMs), diffusion models, and multimodal models such as Vision‑Language Models (VLMs).
  • Strong foundation in machine learning and deep learning fundamentals.
  • Excellent coding skills in Python, with experience in PyTorch, TensorFlow, or JAX.
  • Demonstrated ability to work collaboratively in cross‑functional teams and communicate complex ideas to both technical and non‑technical stakeholders.
  • Experience in applying responsible AI principles to real‑world problems and products, including 3D machine learning techniques and their applications in design, engineering, or manufacturing.
  • Familiarity with AI governance frameworks and regulatory landscapes.
  • Knowledge of fairness, accountability, transparency, and safety in AI systems.
  • Experience in developing or implementing AI safety measures, such as robustness to distribution shift, uncertainty quantification, or AI alignment techniques.
  • Track record of contributing to open‑source projects related to responsible AI or AI safety.