AI Engineering Researcher

ZipRecruiter

London

On-site

GBP 60,000 - 80,000

Full time

23 days ago

Job summary

A leading technology and data engineering firm in London is seeking an AI Engineering Researcher. This role involves designing and developing ETL processes, integrating data from diverse sources, and optimizing performance in a fast-growing AI Lab. The ideal candidate will have a strong background in computer science or engineering and be proficient in programming, particularly in Python, Java, or Scala. The position promises a steep learning curve and valuable research experience in the field of artificial intelligence.

Qualifications

  • Proficiency in programming (Python/Java/Scala) is crucial.
  • Strong SQL experience with various database technologies.
  • Familiarity with data warehousing solutions is a plus.

Responsibilities

  • Design, develop, and maintain ETL processes for data ingestion.
  • Integrate data from various sources ensuring quality and security.
  • Design scalable architectures using cloud platforms.

Skills

Python
Java
Scala
SQL
Data Integration
Data Governance
Data Architecture
Data Analysis

Education

Bachelor’s or Master’s degree in Computer Science/Engineering/Math/Physics

Tools

Hadoop
Spark
Kafka
AWS
Google Cloud
Azure

Job description

Our client, a London-based Technology and Data Engineering leader, has an opportunity in a high-growth AI Lab for an 'AI Engineering Researcher'.

The client is a UK-based 'Enterprise' Artificial Intelligence organisation focused on helping accelerate its clients' journeys towards becoming 'AI-Optimal', starting with significantly enhancing their ability to leverage AI & machine intelligence to outperform traditional competition.

The firm builds on its rapidly expanding research team of exceptional PhD computer scientists, software engineers, mathematicians & physicists, applying a unique multi-disciplinary approach to solving enterprise-AI problems.

Principal Activities of the role:

• Data Pipeline Development: Design, develop, and maintain ETL processes to efficiently ingest data from various sources into data warehouses or data lakes.

• Data Integration and Management: Integrate data from disparate sources, ensuring data quality, consistency, and security across systems. Implement data governance practices and manage metadata.

• System Architecture: Design robust, scalable, and high-performance data architectures using cloud-based platforms (e.g., AWS, Google Cloud, Azure).

• Performance Optimization: Monitor, troubleshoot, and optimize data processing workflows to improve performance and reduce latency.

Typical background:

− Bachelor’s or Master’s degree in Computer Science/Engineering/Math/Physics, plus one or more of the following:

− Proficiency in programming languages such as Python, Java, or Scala.

− Strong experience with SQL and database technologies (incl. various vector stores and more traditional technologies, e.g. MySQL, PostgreSQL, NoSQL databases).

− Hands-on experience with data tools and frameworks such as Hadoop, Spark, or Kafka (an advantage).

− Familiarity with data warehousing solutions and cloud data platforms.

− Background in building applications wrapped around AI/LLM/mathematical models.

− Ability to scale algorithms up to production.

Key Proposition: This role offers the opportunity to be part of creating world-class engineered solutions within Artificial Intelligence / Machine Learning, with a steep learning curve and an unmatched research experience.

Time Commitments: 100% (average 40 hours per week)
