DevOps Engineer

Tarjama&

Riyad Al Khabra

On-site

SAR 200,000 - 300,000

Full time

Yesterday

Job summary

A leading technology company in Al-Qassim Province is seeking an experienced DevOps Engineer to manage the deployment and reliability of AI systems and LLM services. You will focus on building CI/CD pipelines, optimizing cloud-native infrastructure, and ensuring security compliance for production environments. The ideal candidate will have at least 5 years of experience in DevOps, strong skills in Docker and Kubernetes, and a solid understanding of AI workloads. This position offers competitive compensation and opportunities for growth.

Qualifications

  • Minimum 5 years of hands-on DevOps engineering experience in production environments.
  • Proven experience deploying and operating AI systems and LLM-based workloads in production.
  • Strong hands-on expertise with Docker, Kubernetes, CI/CD platforms, and cloud services.

Responsibilities

  • Design, build, and maintain CI/CD pipelines for AI models and software applications.
  • Deploy, operate, and scale AI systems and LLM APIs.
  • Monitor infrastructure performance and system health using observability tools.

Skills

DevOps engineering
Docker
Kubernetes
CI/CD platforms
Monitoring
Infrastructure as code
Network security
Cloud-native architecture

Job description

The DevOps Engineer will play a mission-critical role, owning the deployment, scalability, security, and reliability of AI systems and digital platforms. This role has a strong focus on LLM deployments, AI workloads, and cloud-native infrastructure, ensuring that all AI and software systems operate with enterprise-grade availability, performance, and compliance.

Key Responsibilities

CI / CD & Automation Engineering
  • Design, build, and maintain CI / CD pipelines for AI models, LLM services, and software applications.
  • Automate build, test, deployment, and environment configuration workflows to enable rapid and reliable releases.
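
As a concrete illustration of the release automation described above, the sketch below builds, publishes, and rolls out a container image, then waits for the rollout to complete. It is a minimal example only: the registry URL, image name, namespace, and deployment name are placeholders, and a real pipeline would run these steps from a CI system (GitHub Actions, GitLab CI, Jenkins, or similar) rather than a standalone script.

```python
import subprocess
import sys

# Placeholder values for illustration; a real pipeline would read these
# from CI variables or a pipeline configuration file.
REGISTRY = "registry.example.com/ai-platform"
IMAGE = "llm-gateway"
DEPLOYMENT = "llm-gateway"
NAMESPACE = "ai-services"


def run(cmd: list[str]) -> None:
    """Run a command, echoing it first and aborting the release on failure."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


def release(tag_suffix: str) -> None:
    """Build, push, and roll out one image tag, then wait for the rollout."""
    tag = f"{REGISTRY}/{IMAGE}:{tag_suffix}"
    run(["docker", "build", "-t", tag, "."])
    run(["docker", "push", tag])
    run(["kubectl", "-n", NAMESPACE, "set", "image",
         f"deployment/{DEPLOYMENT}", f"{IMAGE}={tag}"])
    run(["kubectl", "-n", NAMESPACE, "rollout", "status",
         f"deployment/{DEPLOYMENT}", "--timeout=300s"])


if __name__ == "__main__":
    release(sys.argv[1] if len(sys.argv) > 1 else "latest")
```
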
AI & LLM Deployment Operations
  • Deploy, operate, and scale AI systems, LLM APIs, inference workloads, and cloud-based AI services.
  • Ensure high availability, horizontal scalability, and low-latency inference across all production environments.
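
Latency and availability expectations of this kind are usually enforced with automated checks. The sketch below is one hedged example: it probes a hypothetical inference health endpoint, computes a p95 latency, and compares it against an assumed 250 ms objective. The endpoint URL and the SLO value are illustrative, not figures stated in this posting.

```python
import statistics
import time

import requests  # third-party: pip install requests

# Hypothetical in-cluster endpoint; substitute the real service URL.
ENDPOINT = "http://llm-gateway.ai-services.svc.cluster.local/v1/health"


def probe(samples: int = 20, slo_p95_ms: float = 250.0) -> bool:
    """Measure request latency against the endpoint and check a p95 objective."""
    latencies_ms = []
    for _ in range(samples):
        start = time.perf_counter()
        response = requests.get(ENDPOINT, timeout=5)
        response.raise_for_status()
        latencies_ms.append((time.perf_counter() - start) * 1000.0)
    p95 = statistics.quantiles(latencies_ms, n=20)[18]  # 95th percentile
    print(f"p95 latency: {p95:.1f} ms over {samples} samples")
    return p95 <= slo_p95_ms


if __name__ == "__main__":
    raise SystemExit(0 if probe() else 1)
```
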
Infrastructure, Reliability & Cost Optimization
  • Monitor infrastructure performance, system health, and AI workloads using observability and monitoring tools.
  • Optimize infrastructure for reliability, performance, and cloud cost efficiency.
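
Observability tooling varies by team; as one possible approach, the sketch below exposes GPU utilization and inference latency metrics in Prometheus format using the prometheus_client library. The metric names, port, and simulated readings are assumptions for illustration, to be replaced by real collectors in production.

```python
import random
import time

from prometheus_client import Gauge, Histogram, start_http_server  # pip install prometheus-client

# Illustrative metric names; align these with the team's naming conventions.
GPU_UTILIZATION = Gauge("gpu_utilization_ratio", "Fraction of GPU capacity in use")
INFERENCE_LATENCY = Histogram("inference_latency_seconds", "LLM inference latency")


def collect_once() -> None:
    """Publish one round of simulated readings for the Prometheus scrape endpoint."""
    GPU_UTILIZATION.set(random.uniform(0.2, 0.9))  # stand-in for a real GPU reading
    with INFERENCE_LATENCY.time():                 # times the block and records it
        time.sleep(random.uniform(0.05, 0.2))      # stand-in for an inference call


if __name__ == "__main__":
    start_http_server(9100)  # metrics served at http://localhost:9100/metrics
    while True:
        collect_once()
        time.sleep(5)
```
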
Security, Compliance & Governance
  • Implement and enforce security best practices, access controls, secrets management, and environment isolation.
  • Ensure infrastructure and deployment processes align with national data governance, compliance, and cybersecurity standards.
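
Secrets handling is environment specific; the minimal sketch below assumes secrets are injected as environment variables (for example from Kubernetes Secrets or a vault agent) and fails fast when one is missing, rather than starting a half-configured service. The variable names are hypothetical.

```python
import os
from dataclasses import dataclass


@dataclass(frozen=True)
class AppSecrets:
    """Secrets injected by the platform; never hard-code or log their values."""
    db_password: str
    llm_api_key: str


def load_secrets() -> AppSecrets:
    """Fail fast if a required secret is missing."""
    required = ("DB_PASSWORD", "LLM_API_KEY")  # hypothetical variable names
    missing = [name for name in required if name not in os.environ]
    if missing:
        raise RuntimeError(f"missing required secrets: {', '.join(missing)}")
    return AppSecrets(
        db_password=os.environ["DB_PASSWORD"],
        llm_api_key=os.environ["LLM_API_KEY"],
    )


if __name__ == "__main__":
    secrets = load_secrets()
    # Log only which keys were loaded, never the secret values themselves.
    print("loaded secrets:", ", ".join(vars(secrets).keys()))
```
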
Cross-Functional Enablement
  • Collaborate closely with AI Engineers, Full-Stack Engineers, and Product teams to enable seamless, scalable deployments.
  • Act as the primary technical owner for production reliability during mission-critical deployments.
Documentation & Architecture Standards
  • Maintain comprehensive documentation for DevOps workflows, system architecture, environments, and deployment standards.
  • Ensure operational readiness, auditability, and knowledge transfer across teams.

Required Qualifications
  • Minimum 5 years of hands-on DevOps engineering experience in production environments.
  • Mandatory: Proven experience deploying and operating AI systems and LLM-based workloads in production.
  • Strong hands-on expertise with Docker, Kubernetes, CI/CD platforms, and cloud services.
  • Experience with monitoring, observability, logging, and infrastructure-as-code tooling (e.g., Terraform or similar tools).
  • Strong understanding of networking, security, and cloud-native architecture principles.
  • Excellent troubleshooting and incident response capabilities in high-availability systems.

Preferred Qualifications
  • Experience with MLOps platforms such as MLflow, SageMaker, Vertex AI, or similar (see the sketch after this list).
  • Proven experience scaling AI and LLM applications in high-traffic production environments.
  • Exposure to AI model lifecycle management, retraining pipelines, and operational governance.
  • Experience in government, regulated, or national-scale enterprise environments.
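
As a small, hedged illustration of the MLOps platform experience mentioned in the list above, the sketch below records a retraining run's parameters and evaluation metrics with MLflow. The tracking URI, experiment name, and metric values are placeholders, not details taken from this posting.

```python
import mlflow  # pip install mlflow

# Placeholder tracking server and experiment name for illustration only.
mlflow.set_tracking_uri("http://mlflow.internal.example.com")
mlflow.set_experiment("llm-retraining")

with mlflow.start_run(run_name="nightly-retrain"):
    mlflow.log_param("base_model", "example-7b")  # hypothetical model name
    mlflow.log_metric("eval_loss", 1.87)          # illustrative numbers
    mlflow.log_metric("p95_latency_ms", 212.0)
    # Model weights and evaluation reports would be logged here with
    # mlflow.log_artifact(...) before promoting the run in a model registry.
```
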
KPIs & Deliverables
  • Uptime, reliability, and stability of AI platforms and production systems.
  • Deployment speed, automation maturity, and release reliability.
  • Infrastructure performance, scalability, and cost optimization efficiency.
  • Security posture and compliance readiness across all environments.
  • Quality, completeness, and audit readiness of DevOps documentation and workflows.