Software Engineer, Research Infrastructure

OpenAI

San Francisco (CA)

Hybrid

USD 325,000 - 590,000

Full time

11 days ago


Job summary

An innovative company is seeking an engineer to support its extensive GPU fleet infrastructure. This role involves designing, implementing, and operating systems for model deployment and training. You'll collaborate with researchers and product teams to understand workload requirements, ensuring high reliability and utilization of services. With a hybrid work model and a commitment to advancing AI responsibly, this is a unique opportunity to shape critical systems in a fast-paced environment. If you're passionate about technology and eager to make an impact, this role is perfect for you.

Benefits

Relocation Assistance
Hybrid Work Model
Equal Opportunity Employer
Commitment to Diversity

Qualifications

  • Experience with hyperscale compute systems and strong programming skills.
  • Familiarity with public cloud environments, especially Azure.

Responsibilities

  • Design and operate components of the compute fleet including job scheduling.
  • Collaborate with teams to ensure high utilization and reliability.

Skills

Hyperscale Compute Systems
Programming Skills
Public Cloud (Azure)
Kubernetes
AI/ML Workloads

Job description

This role will support the fleet infrastructure team at OpenAI. The fleet team focuses on running the world's largest, most reliable, and most frictionless GPU fleet to support OpenAI's general-purpose model training and deployment. Work on this team includes:

  • Maximizing GPUs doing useful work by building user-friendly scheduling and quota systems

  • Running a reliable, low-maintenance platform by building push-button automation for Kubernetes cluster provisioning and upgrades

  • Supporting research workflows with service frameworks and deployment systems

  • Ensuring fast model startup times through high-performance snapshot delivery, from blob storage down to hardware caching

  • Much more!

About the Role

As an engineer within Fleet Infrastructure, you will design, write, deploy, and operate infrastructure systems for model deployment and training on one of the world's largest GPU fleets. The scale is immense, the timelines are tight, and the organization is moving fast; this is an opportunity to shape a critical system in support of OpenAI's mission to advance AI capabilities responsibly.

This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.

In this role, you will:

  • Design, implement, and operate components of our compute fleet, including job scheduling, cluster management, snapshot delivery, and CI/CD systems

  • Interface with researchers and product teams to understand workload requirements

  • Collaborate with hardware, infrastructure, and business teams to provide a high-utilization, high-reliability service

You might thrive in this role if you:

  • Have experience with hyperscale compute systems

  • Possess strong programming skills

  • Have experience working in public clouds (especially Azure)

  • Have experience working in Kubernetes

  • Have an execution-focused mentality paired with a rigorous focus on user requirements

  • As a bonus, have an understanding of AI/ML workloads

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.

We are an equal opportunity employer and do not discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, veteran status, disability or any other legally protected status.

OpenAI Affirmative Action and Equal Employment Opportunity Policy Statement

For US Based Candidates: Pursuant to the San Francisco Fair Chance Ordinance, we will consider qualified applicants with arrest and conviction records.

We are committed to providing reasonable accommodations to applicants with disabilities upon request.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.

Compensation Range: $325K - $590K


Similar jobs

  • Senior Cloud Platform Software Engineer | NVIDIA Corporation | Santa Clara | Remote | USD 224,000 - 426,000 | 6 days ago
  • Software Engineer - OpenStack | Canonical | San Francisco | Remote | USD 100,000 - 720,000 | 14 days ago
  • Architect-Cloud-CA or WA preferred | Juniper Networks, Inc | Sunnyvale | Remote | USD 284,000 - 409,000 | 9 days ago
  • Research Engineer, Tokens ML Infra | Anthropic | San Francisco | Hybrid | USD 315,000 - 425,000 | 5 days ago
  • Security Engineer, Cloud Security | OpenAI | San Francisco | Remote | USD 279,000 - 385,000 | 30+ days ago
  • Research Engineer, Frontier Evals | OpenAI | San Francisco | On-site | USD 200,000 - 370,000 | 2 days ago
  • Software Engineer - Production Engineering | Figma | San Francisco | Remote | USD 149,000 - 350,000 | 30+ days ago
  • Software Engineer - FigFile Platform | Figma | San Francisco | Remote | USD 149,000 - 350,000 | 30+ days ago
  • Software Engineer - Distributed Storage | Figma | San Francisco | Remote | USD 149,000 - 350,000 | 30+ days ago