A leading technology and data engineering firm in London is seeking an AI Engineering Researcher. This role involves designing and developing ETL processes, integrating data from diverse sources, and optimizing performance in a fast-growing AI Lab. The ideal candidate will have a strong background in computer science or engineering and be proficient in programming, particularly Python, Java, or Scala. The position promises a steep learning curve and valuable research experience in the field of artificial intelligence.
Job Description
Our client, a London-based Technology and Data Engineering leader, has an opportunity in a high-growth AI Lab for an 'AI Engineering Researcher'.
The client is a UK-based 'Enterprise' Artificial Intelligence organisation focussed on helping its clients accelerate their journey towards becoming 'AI-Optimal', starting with significantly enhancing their ability to leverage AI and machine intelligence to outperform traditional competition.
The firm builds upon its rapidly expanding research team of exceptional PhD computer scientists, software engineers, mathematicians and physicists, taking a unique multi-disciplinary approach to solving enterprise-AI problems.
Principal Activities of the Role:
• Data Pipeline Development: Design, develop, and maintain ETL processes to efficiently ingest data from various sources into data warehouses or data lakes.
• Data Integration and Management: Integrate data from disparate sources, ensuring data quality, consistency, and security across systems. Implement data governance practices and manage metadata.
• System Architecture: Design robust, scalable, and high-performance data architectures using cloud-based platforms (e.g., AWS, Google Cloud, Azure).
• Performance Optimization: Monitor, troubleshoot, and optimize data processing workflows to improve performance and reduce latency.
Typical Background:
− Bachelor's or Master's degree in Computer Science, Engineering, Mathematics, or Physics, plus one or more of the following:
− Proficiency in programming languages such as Python, Java, or Scala.
− Strong experience with SQL and database technologies (including various vector stores as well as more traditional technologies, e.g. MySQL, PostgreSQL, NoSQL databases).
− Hands-on experience with data tools and frameworks such as Hadoop, Spark, or Kafka (an advantage).
− Familiarity with data warehousing solutions and cloud data platforms.
− Background in building applications wrapped around AI/LLM/mathematical models.
− Ability to scale up algorithms to production.
Key Proposition: This role offers the opportunity to be part of creating world-class engineered solutions in Artificial Intelligence / Machine Learning, with a steep learning curve and an unmatched research experience.
Time Commitments: 100% (average 40 hours per week)