
A leading AI startup is seeking a Senior ML Systems / ML DevOps Engineer to own the infrastructure behind its machine learning training and inference workloads. The role involves operating GPU-heavy clusters, designing and automating the ML platform, and troubleshooting complex problems across multiple cloud providers. Candidates should have extensive experience in DevOps or platform engineering roles and a strong background in Linux and cloud infrastructure. The position is remote-friendly and open to candidates in the EU and North America.
Pathway is shaking the foundations of artificial intelligence by introducing the world’s first post-transformer model that adapts and thinks just like humans.
Pathway’s breakthrough architecture (BDH) outperforms the Transformer and gives enterprises full visibility into how the model works. By combining this foundational model with the fastest data processing engine on the market, Pathway enables enterprises to move beyond incremental optimization toward truly contextualized, experience-driven intelligence. The company is trusted by organizations such as NATO, La Poste, and Formula 1 racing teams.
Pathway is led by co‑founder & CEO Zuzanna Stamirowska, a complexity scientist who has assembled a team of AI pioneers, including CTO Jan Chorowski, who was the first to apply attention to speech and worked with Nobel laureate Geoff Hinton at Google Brain, and CSO Adrian Kosowski, a leading computer scientist and quantum physicist who earned his PhD at the age of 20.
The company is backed by leading investors and advisors, including TQ Ventures and Lukasz Kaiser, co‑author of the Transformer (“the T” in ChatGPT) and a key researcher behind OpenAI’s reasoning models. Pathway is headquartered in Palo Alto, California.
We are looking for a Senior ML Systems / ML DevOps Engineer who loves Linux, distributed systems, and scaling GPU clusters more than fiddling with notebooks. You will own the infrastructure that powers our ML training and inference workloads across multiple cloud providers, from bare‑bones Linux to container orchestration and CI/CD.
You will sit close to the R&D team, but your home is production infrastructure: clusters, networks, storage, observability, and automation. Your work will directly determine how fast we can train, ship, and iterate on models.