Site Reliability Engineer - AI Cloud

Supermicro

San Jose (CA)

On-site

USD 145,000 - 165,000

Full time

4 days ago

Job summary

A leading tech company seeks a Cloud Reliability Engineer to enhance scalability and security across AI cloud platforms. The role demands expertise in cloud infrastructure, automation, and incident management while fostering cross-team collaboration and optimizing performance across GPU-accelerated computing clusters.

Qualifications

  • 8 years of experience in relevant areas.
  • Proficiency in scripting and coding with Bash, Python, or Go.
  • Hands-on experience with GPU compute clusters.

Responsibilities

  • Design and provision cloud infrastructure using Infrastructure as Code.
  • Implement observability tools to monitor system health.
  • Lead root cause analysis and resolution for system outages.

Skills

Linux
Containerization
Scripting
Communication
Networking Protocols

Education

Bachelor’s degree in Computer Science, Engineering, or a related field

Tools

Terraform
Ansible
Prometheus
Grafana
ELK
Kubernetes

Job description

Location: San Jose, California, United States

About Supermicro:

Supermicro is a top-tier provider of advanced server, storage, and networking solutions for Data Center, Cloud Computing, Enterprise IT, Hadoop/Big Data, Hyperscale, HPC, and IoT/Embedded customers worldwide. We are the #5 fastest-growing company among the Silicon Valley Top 50 technology firms. Our unprecedented global expansion has provided us with the opportunity to offer a large number of new positions to the technology community. We seek talented, passionate, and committed engineers, technologists, and business leaders to join us.

Job Summary:

As a Cloud Reliability Engineer for our Linux-based AI cloud platforms, you will help us deploy and scale GPU-accelerated compute clusters, Kubernetes workloads, and supporting storage/network infrastructure while ensuring their high availability, performance, scalability, and security. You’ll bridge Dev and Ops by automating infrastructure deployment, enhancing observability, and applying SRE best practices to support reliable AI and MLOps environments.

Essential Duties and Responsibilities:

Includes the following essential duties and responsibilities (other duties may also be assigned):

  • Cloud Infra Automation: Design and provision cloud infrastructure using Infrastructure as Code (Terraform, Ansible, or Helm) on bare metal or cloud platforms. Develop custom automation and tooling in Python or Go to extend deployment workflows and streamline operations (see the Terraform wrapper sketch after this list).
  • Platform Reliability: Deploy, scale, maintain, and optimize uptime for AI cloud services, including GPU clusters, Kubernetes (K8s), and storage systems (e.g., Ceph, BeeGFS, or Weka). Understand the tooling required to benchmark and ensure consistent application performance.
  • Monitoring & Alerting: Implement observability tools (e.g., Prometheus, Grafana, ELK, Loki, Fluentd) to monitor system health and alert on anomalies or performance degradation (see the exporter sketch after this list).
  • Capacity Planning: Analyze usage trends and forecast infrastructure needs to support AI workloads and large-scale model training/inference.
  • Incident Management: Lead root cause analysis and resolution for system outages or degraded performance. Define and maintain service level objectives (SLOs), indicators (SLIs), and agreements (SLAs) aligned with uptime and performance goals (see the error-budget sketch after this list).
  • CI/CD Integration: Collaborate with DevOps and MLOps teams to ensure reliable delivery pipelines using GitLab CI/CD, ArgoCD, or similar tools.
  • Security & Compliance: Harden Linux systems, manage TLS certificates, and enforce secure access controls via Role-Based Access Control (RBAC), LDAP-integrated SSO, and network segmentation policies.
  • Documentation & Playbooks: Maintain clear, version-controlled documentation, including architecture diagrams, runbooks, and incident response playbooks to support cross-team knowledge transfer and rapid onboarding.
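
To make the Cloud Infra Automation duty concrete, the following is a minimal sketch of the kind of Python tooling that might wrap a Terraform plan/apply workflow. It assumes a hypothetical infra/<environment> directory of *.tf files; the layout, function names, and review-before-apply flag are illustrative only, not Supermicro's actual tooling.

    #!/usr/bin/env python3
    """Hypothetical wrapper around a Terraform plan/apply cycle for one environment."""
    import subprocess
    import sys
    from pathlib import Path


    def run(cmd: list[str], cwd: Path) -> None:
        """Run a command in the given working directory and fail loudly on error."""
        print(f"+ {' '.join(cmd)}  (in {cwd})")
        subprocess.run(cmd, cwd=cwd, check=True)


    def deploy(env: str, apply: bool = False) -> None:
        workdir = Path("infra") / env  # assumed layout: infra/dev, infra/prod, ...
        run(["terraform", "init", "-input=false"], workdir)
        run(["terraform", "plan", "-input=false", "-out=tfplan"], workdir)
        if apply:
            run(["terraform", "apply", "-input=false", "tfplan"], workdir)
        else:
            print("Plan written to tfplan; review it, then re-run with --apply.")


    if __name__ == "__main__":
        deploy(sys.argv[1] if len(sys.argv) > 1 else "dev", apply="--apply" in sys.argv)

In practice this kind of glue usually runs inside a CI job rather than from a workstation, but the shape is the same: plan, review, apply.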
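
For the Monitoring & Alerting duty, a small custom exporter is a common pattern when no off-the-shelf exporter covers a metric. The sketch below assumes the prometheus_client Python library and a host where nvidia-smi is available, and publishes per-GPU temperatures for Prometheus to scrape; the metric name and port are illustrative choices, not an established convention.

    #!/usr/bin/env python3
    """Minimal GPU-temperature exporter sketch (assumes prometheus_client and nvidia-smi)."""
    import subprocess
    import time

    from prometheus_client import Gauge, start_http_server

    GPU_TEMP = Gauge("gpu_temperature_celsius",
                     "GPU temperature reported by nvidia-smi", ["gpu"])


    def read_temperatures() -> list[int]:
        """Return one temperature per GPU, in the order nvidia-smi lists them."""
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=temperature.gpu", "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True,
        ).stdout
        return [int(line) for line in out.splitlines() if line.strip()]


    if __name__ == "__main__":
        start_http_server(9105)  # illustrative port for the Prometheus scrape target
        while True:
            for idx, temp in enumerate(read_temperatures()):
                GPU_TEMP.labels(gpu=str(idx)).set(temp)
            time.sleep(15)  # roughly one scrape interval

Production GPU fleets typically rely on a dedicated exporter such as NVIDIA's DCGM exporter; a hand-rolled script like this is mainly useful for one-off metrics nothing standard exposes, with Grafana dashboards and alert rules layered on top of whatever Prometheus scrapes.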
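
For the Incident Management duty, the arithmetic behind an availability SLO is worth spelling out, since the error budget is what turns an SLO into a day-to-day decision tool. The sketch below uses an illustrative 99.9% target over a 30-day window; the numbers are examples, not this team's actual objectives.

    """Error-budget arithmetic for an availability SLO (illustrative numbers only)."""


    def error_budget_minutes(slo: float, window_days: int) -> float:
        """Minutes of allowed downtime for a given availability SLO over the window."""
        total_minutes = window_days * 24 * 60
        return total_minutes * (1.0 - slo)


    def budget_remaining(slo: float, window_days: int, downtime_minutes: float) -> float:
        """Fraction of the error budget still unspent (negative means the SLO is blown)."""
        budget = error_budget_minutes(slo, window_days)
        return (budget - downtime_minutes) / budget


    if __name__ == "__main__":
        # 99.9% availability over a 30-day window leaves 43.2 minutes of budget.
        print(f"budget: {error_budget_minutes(0.999, 30):.1f} min")
        # 12 minutes of downtime so far leaves roughly 72% of that budget.
        print(f"remaining: {budget_remaining(0.999, 30, 12):.0%}")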

Qualifications:
  • Bachelor’s degree in Computer Science, Engineering, or a related field, or equivalent experience, plus 8 years of experience in the areas below.
  • Proficiency in Linux (Ubuntu, RHEL/CentOS), containers (Docker, Podman), and orchestration (Kubernetes).
  • Experience managing GPU compute clusters (NVIDIA/CUDA, AMD/ROCm).
  • Hands-on experience with observability tools (Prometheus, Grafana, Loki, ELK, etc.).
  • Strong scripting and coding skills (Bash, Python, or Go).
  • Exposure to secure multi-tenant environments and zero trust architectures.
  • Familiarity with networking protocols such as DNS, DHCP, BGP, and RoCEv2, and with InfiniBand or high-throughput Ethernet fabrics.
  • Excellent collaboration and communication skills for cross-team, partner, and customer initiatives.

Preferred Qualifications:

  • Understanding of AI/ML reference architectures and experience with ML workflow tools such as MLflow or Kubeflow.
  • Familiarity with storage backends optimized for AI (CephFS, BeeGFS, WekaFS).
  • Prior experience in bare-metal provisioning via PXE, Ironic, or Foreman.
  • Understanding of NVIDIA GPU telemetry and NCCL testing for performance benchmarking.
  • Familiarity with ITIL processes or structured change management in production systems is a plus.
  • Certifications: CKA, CKAD, Linux+, or related credentials.

Salary Range

$145,000 - $165,000

The salary offered will depend on several factors, including your location, level, education, training, specific skills, years of experience, and comparison to other employees already in this role. In addition to a comprehensive benefits package, candidates may be eligible for other forms of compensation, such as participation in bonus and equity award programs.

EEO Statement

Supermicro is an Equal Opportunity Employer and embraces diversity in our employee population. It is the policy of Supermicro to provide equal opportunity to all qualified applicants and employees without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, protected veteran status or special disabled veteran, marital status, pregnancy, genetic information, or any other legally protected status.
