Research Engineer - Brain-Inspired AI Systems (Contractor)

Huawei

City of Westminster

On-site

GBP 40,000 - 60,000

Full time

Yesterday

Job summary

A leading technology firm in the United Kingdom is seeking two talented Research Engineers to develop efficient large language model architectures inspired by the human brain. This role involves implementing advanced memory systems and computational infrastructures for next-generation AI models. Successful candidates will collaborate with research scientists and international teams, focusing on efficient algorithm design, model optimization, and performance evaluation. The ideal candidates will have a Master's degree in a related field and strong programming skills in Python.

Job description

We are seeking two talented Research Engineers to join our innovative project on developing efficient and scalable large language model (LLM) architectures inspired by the human brain. This role will focus on implementing and optimizing advanced memory systems and computational infrastructure for next-generation AI models. The successful candidates will work at the intersection of deep learning systems, neuroscience, and hardware optimization, contributing to the development of highly efficient AI solutions and next-generation agentic LLMs.

This job description is only an outline of the tasks, responsibilities and outcomes required of the role. The jobholder will carry out any other duties as may be reasonably required by his/her line manager. The job description and personal specification may be reviewed on an ongoing basis in accordance with the changing needs of Huawei Research and Development UK Limited.

Responsibilities

  • Design and implement efficient algorithms for large-scale language models.
  • Create high-performance solutions for model deployment and inference.
  • Work closely with research scientists to implement novel algorithmic approaches.
  • Collaborate with international teams on system integration and optimization.
  • Contribute to technical documentation, scientific papers and patent applications.
  • Lead performance evaluation and system benchmarking.

Qualifications

  • Master's degree in Computer Science, Computer Engineering, or related field (or equivalent industry experience).
  • Strong programming skills in Python.
  • Experience with PyTorch.
  • Good understanding of transformer architectures and attention mechanisms.
  • Experience implementing and/or optimizing deep learning models.
  • Experience with large language models.
  • Track record of efficient implementation of research papers.
  • Knowledge of model optimization techniques and best practices.
  • Experience with distributed training and inference.
  • Understanding of memory optimization in deep learning.
  • Background in hardware acceleration (GPU programming).
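
For candidates gauging the "transformer architectures and attention mechanisms" requirement above, the following is a minimal, purely illustrative sketch of scaled dot-product attention in plain Python (not Huawei code, and no framework dependencies); all function names here are our own:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention over lists of vectors.

    For each query, score every key by a scaled dot product,
    normalize the scores with softmax, and return the
    weighted average of the value vectors.
    """
    d = len(keys[0])  # key dimensionality, used for scaling
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out
```

In production work the same computation would run as batched tensor operations (e.g. in PyTorch), but the structure — score, normalize, aggregate — is identical.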