
Machine Learning DevOps - Cloud and Compute Cluster

Pathway

Paris

Remote

EUR 60 000 - 80 000

Full-time

Today
Be among the first to apply

Job summary

A leading AI startup is seeking a Machine Learning DevOps professional to optimize infrastructure for ML training. The position involves automating ML pipelines, managing models, and ensuring CI/CD practices across the ML lifecycle. Candidates should have solid experience with Linux, containerization, and cloud services. This role offers a remote work possibility with exciting career prospects in a dynamic environment.

Qualifications

  • Very good familiarity with Linux, shell scripts, and cluster configuration scripts.
  • Proficiency in workload management and orchestration.
  • Solid grasp of CI/CD tools and workflows.

Responsibilities

  • Optimize infrastructure for ML training and inference.
  • Automate and maintain ML pipelines.
  • Manage model versioning and reproducibility.
  • Work with terabyte-scale datasets.
  • Implement ML-centric CI/CD practices.

Skills

  • Linux
  • Containerization and orchestration (Slurm, Docker, Kubernetes)
  • CI/CD tools (GitHub Actions, Jenkins)
  • Cloud infrastructure knowledge (AWS, GCP, Azure)
  • Monitoring/logging tools (Grafana, CloudWatch)
  • Infrastructure as code (Terraform, CloudFormation)
  • ML pipeline orchestration tools (MLflow, Kubeflow)
  • Programming skills in Python

Job description

About Pathway

Pathway is shaking the foundations of artificial intelligence by introducing the world’s first post-transformer model that adapts and thinks just like humans.

Pathway’s breakthrough architecture (BDH) outperforms the Transformer and gives enterprises full visibility into how the model works. Combining the foundational model with the fastest data processing engine on the market, Pathway enables enterprises to move beyond incremental optimization and toward truly contextualized, experience-driven intelligence. The company is trusted by organizations such as NATO, La Poste, and Formula 1 racing teams.

Pathway is led by co-founder & CEO Zuzanna Stamirowska, a complexity scientist who has assembled a team of AI pioneers, including CTO Jan Chorowski, who was the first to apply Attention to speech and worked with Nobel laureate Geoff Hinton at Google Brain, and CSO Adrian Kosowski, a leading computer scientist and quantum physicist who obtained his PhD at the age of 20.

The company is backed by leading investors and advisors, including TQ Ventures and Lukasz Kaiser, co-author of the Transformer (“the T” in ChatGPT) and a key researcher behind OpenAI’s reasoning models. Pathway is headquartered in Palo Alto, California.

The opportunity

We are currently searching for a Machine Learning DevOps engineer with experience in cloud and compute cluster management and Linux administration.

Our development and production environments are in the cloud, spread across several major cloud providers. We need support in managing and automating these processes and in scaling the infrastructure to meet growing team and production needs.

You Will
  • Optimize infrastructure for ML training and inference (e.g., GPUs, distributed compute).
  • Automate and maintain ML pipelines (data ingestion, training, validation, deployment).
  • Manage model versioning, reproducibility, and traceability.
  • Work with terabyte-scale datasets.
  • Implement ML-centric CI/CD practices.
  • Monitor model performance and data drift in production.
  • Collaborate with machine learning engineers, software engineers, and platform teams.

The role focuses on operationalizing machine learning models, ensuring scalability, reliability, and automation across the ML lifecycle.

What We Are Looking For
  • Very good familiarity with Linux, shell scripts, and cluster configuration scripts as your day-to-day working tools.
  • Proficiency in workload management, containerization and orchestration (Slurm, Docker, Kubernetes).
  • Solid grasp of CI/CD tools and workflows (GitHub Actions, Jenkins, GitLab CI, etc.).
  • Cloud infrastructure knowledge (AWS, GCP, Azure) – especially in ML services (e.g., SageMaker HyperPod, Vertex AI).
  • Familiarity with monitoring/logging tools (Grafana, CloudWatch, Prometheus, Loki).
  • Experience with infrastructure as code (Terraform, CloudFormation, cluster-toolkit).
  • Experience with ML pipeline orchestration tools (e.g., MLflow, Kubeflow, Airflow, Metaflow).
  • Programming skills in Python (with exposure to ML libraries like TensorFlow, PyTorch).
  • Willingness to learn.

Why You Should Apply
  • Intellectually stimulating work environment. Be a pioneer: you get to work with real-time data processing & AI.
  • Work in one of the hottest AI startups, with exciting career prospects. Team members are distributed across the world.
  • Responsibilities and the ability to make a significant contribution to the company’s success.
  • Inclusive workplace culture.

Further details
  • Type of contract: Permanent employment contract.
  • Preferred joining date: immediate.
  • Compensation: based on profile and location.
  • Location: Remote work. Possibility to work from, or meet with other team members in, one of our offices: Palo Alto, CA; Paris, France; or Wroclaw, Poland. Candidates based anywhere in the EU, United States, and Canada will be considered.