Join the School of Digital Arts at Manchester Metropolitan University to take part in cutting-edge research on the Emote VR Voicer project. You'll develop innovative VR applications that use AI for emotion detection, working in a multidisciplinary environment to redefine audiovisual experiences.
The School of Digital Arts is a purpose-built, interdisciplinary school at one of the UK's leading universities, offering industry- and research-informed courses and specialist spaces with the latest technologies. The School of Digital Arts is a proud part of Manchester Metropolitan University. We build on the creative, science, tech and business strengths of a university whose research is rated as 'world-leading' and is changing the way we live, work, learn and play.
Working within the School of Digital Arts (SODA), you will join state-of-the-art research on the AHRC-funded Emote VR Voicer project, developing new intelligently responsive VR apps that incorporate speech recognition and meaning classification. You will help create a system that can detect live emotional content in the spoken (or sung) word, mapping this to visual animations based on sample banks of 3D artworks.
AI systems are increasingly able to detect a speaker's emotions, leading to a new affective channel that can be explored in art. The controls available in standard Virtual Reality (VR) can be supplemented with speech recognition, natural language processing, and sentiment analysis. We aim to embody this potential in the front end of a new audiovisual interface, which would translate detected meanings of utterances to the morphing of abstract 3D animated shapes, enabling a radical new aesthetic experience.
About the role:
You will work closely with the project team, consisting of voice specialists, artists and AI researchers, in an iterative development cycle. Your programming skills will be utilised to develop VR interfaces that use AI to detect and tag emotional meaning from audio, generating real-time visuals. Image synthesis, procedural content generation and style transfer will further expand a bank of 3D graphics created specifically for this project. The work will include reworking 3D scans of sculptures into skinned, rigged and textured low-poly 3D models suitable for use in a games engine. You will also be involved in some of the evaluation work.
The job will be for 2.5 days per week (0.5 FTE) on a fixed-term basis for a maximum of 21 months. The working pattern will be mostly on-campus, with some remote working possible depending on the project stage.
About you:
Key skills:
Essential skills and experience:
Desirable:
Interviews will take place on 15th July.
To apply, please submit your CV, a cover letter explaining how you meet the criteria, and two named references via our application portal. If you would like to discuss the role, please email Adinda at A.vant.Klooster@mmu.ac.uk.
Manchester Met University is committed to creating an intentionally inclusive culture of belonging that promotes equity and celebrates diversity. We understand the importance of having a diverse workforce and the benefits it can bring in ensuring diversity of thought and innovation in everything we do. We therefore encourage applications from our local and international communities, in particular people from ethnic minority groups, disabled people and people who identify as LGBTQIA+.