Principal Responsible AI Research Scientist: London, UK
As a Principal Responsible AI Research Scientist at Autodesk Research, you will be at the forefront of developing safe, reliable, and robust AI systems that help our customers imagine, design, and make a better world. You will join a diverse team of scientists, researchers, engineers, and designers working on cutting‑edge projects in AI safety, trustworthy AI, and responsible AI practices across domains including design systems, computer vision, graphics, robotics, human‑computer interaction, sustainability, simulation, manufacturing, architectural design, and construction.
Responsibilities
- Conduct original research on responsible, safe, reliable, robust, and trustworthy AI.
- Develop new ML models and AI techniques with a strong emphasis on safety, reliability, and responsible implementation.
- Collaborate with cross‑functional teams and stakeholders to integrate responsible AI practices into Autodesk products and services.
- Review and synthesize relevant literature on AI safety, reliability, and responsibility to identify emerging methods, technologies, and best practices.
- Work toward long‑term research goals in AI safety and responsibility while identifying intermediate milestones.
- Build collaborations with academics, institutions, and industry partners in the field of responsible and safe AI.
- Explore new data sources and develop techniques for leveraging them in a responsible and reliable manner.
- Publish papers and present at conferences on topics related to responsible, safe, and reliable AI.
- Contribute strategic thinking on research directions that align with Autodesk's commitment to safe and responsible AI development.
Qualifications
- PhD in Computer Science, AI/ML, or a related field with a focus on responsible, safe, and reliable AI.
- Strong record of publications on responsible, safe, reliable, or robust AI (or similar domains) in top‑tier conferences and/or journals.
- Extensive knowledge and hands‑on experience with generative AI, including Large Language Models (LLMs), diffusion models, and multimodal models such as Vision‑Language Models (VLMs).
- Strong foundation in machine learning and deep learning fundamentals.
- Excellent coding skills in Python, with experience in PyTorch, TensorFlow, or JAX.
- Demonstrated ability to work collaboratively in cross‑functional teams and communicate complex ideas to both technical and non‑technical stakeholders.
- Experience applying responsible AI principles to real‑world problems and products, ideally including 3D machine learning techniques and their applications in design, engineering, or manufacturing.
- Familiarity with AI governance frameworks and regulatory landscapes.
- Knowledge of fairness, accountability, transparency, and safety in AI systems.
- Experience in developing or implementing AI safety measures, such as robustness to distribution shift, uncertainty quantification, or AI alignment techniques.
- Track record of contributing to open‑source projects related to responsible AI or AI safety.