
Data Scientist

trg.recruitment

Remote

GBP 70,000 - 100,000

Full time

2 days ago

Job summary

A leading recruitment firm is seeking a Data Scientist to develop ML and LLM features in a fully remote setting. The ideal candidate will have expertise in Python and experience with ML frameworks such as scikit-learn and PyTorch. Responsibilities include researching LLM use cases, building and tuning models, and creating data pipelines using Spark and Kafka. This role offers a competitive salary of up to £100k.

Qualifications

  • Hands-on Python for data science and ML.
  • Practical experience with LLM frameworks (e.g. LangChain).
  • Familiarity with Spark/Kafka and end-to-end pipeline experience.
  • Experience with Databricks, MLflow, Docker, Kubernetes.
  • Clear communication with stakeholders.

Responsibilities

  • Research, prototype, and deploy LLM use cases.
  • Build and tune models using various frameworks.
  • Create data pipelines for batch and streaming.
  • Use Databricks and MLflow for experiments.

Skills

Python
scikit-learn
PyTorch
TensorFlow/Keras
LangChain
Pydantic
Spark
Kafka
Databricks
MLflow
Azure
Docker
Kubernetes

Job description

Data Scientist | Fully Remote | Health and Wellbeing

We’re hiring a Data Scientist to help build and ship ML and LLM features in a regulated environment with sensitive data. You’ll work with engineering to move models from idea to production.

Tech Stack: Python, scikit-learn, PyTorch, TensorFlow/Keras, LangChain, Pydantic, Spark, Kafka, Databricks, MLflow, Azure, Docker, Kubernetes.

Salary: Up to £100k

Working Environment: Fully Remote in UK

What you’ll do
  • Research, prototype, and deploy LLM use cases (Q&A, summarisation, document processing)
  • Build and tune models using scikit-learn, PyTorch, TensorFlow/Keras, and XGBoost
  • Create data pipelines for batch and streaming with Spark and Kafka
  • Use Databricks and MLflow for experiments, deployment, and monitoring

What you’ll bring
  • Hands‑on Python for data science and ML
  • Practical experience with LLM frameworks (e.g. LangChain)
  • Familiarity with Spark/Kafka and end‑to‑end pipeline experience
  • Experience with Databricks, MLflow, Docker, Kubernetes
  • Clear communication with technical and non‑technical stakeholders

If you are interested, apply below or email me directly at mmatysik@trg-uk.com.
