A leading research university in Manchester is seeking a post-doctoral research associate to work on the 'Hardware-Level AI Safety Verification' project. This role involves collaborating with experts to verify AI models at the hardware level, focusing on safety and implementation issues. The ideal candidate will have a strong background in formal methods and machine learning. Excellent benefits include a leading pension scheme and generous leave entitlement.
This 18-month appointment forms part of the project "Hardware-Level AI Safety Verification", funded by the Advanced Research + Invention Agency (ARIA) and carried out in partnership between the University of Manchester and the University of Birmingham. The project belongs to the Mathematics for Safe AI funding stream, which aims to assess how mathematics - from scientific world-models to mathematical proofs - can be leveraged to ensure that powerful AI systems interact safely and as intended with real-world systems and populations.
The project "Hardware-Level AI Safety Verification" will address a fundamental semantic mismatch between the formal guarantees produced by neural network verification tools and the actual implementation of neural networks at the hardware level. Specifically, hardware-level effects such as quantisation and sampling are often ignored during the verification of AI models. Yet, they are pervasive phenomena in any engineering application where digital compute platforms interact with the physical world. Their impact on the behaviour of neural network controllers and other AI models acting in a physical environment is not well understood.
The project is a collaborative effort: academics, post-docs and interns across the two universities will build better algorithms, software tools and benchmarks to assess the safety of AI implementations at the software and hardware levels. We are recruiting an enthusiastic and collaborative post-doctoral research associate with expertise in formal methods, machine learning, control theory, numerical analysis, or a related discipline, and a strong focus on AI safety and neural network verification. The post holder is expected to work closely with the two principal investigators in Manchester and Birmingham.
The role will be based in the Department of Computer Science at the University of Manchester. The department is one of the oldest Computer Science departments in the United Kingdom and hosts around 60 academic staff. The role will be associated with the Systems and Software Security research group, but interactions with other relevant research groups (Autonomy and Verification, Formal Methods, Machine Learning and Robotics) are expected.
What we offer: