MLOps Engineer / AI Infrastructure Specialist OC-16B

Oceans Code Experts

Ciudad de México

Hybrid

MXN 1,454,000 - 2,182,000

Full-time

Posted today

Job Description

A technology consulting firm is seeking a seasoned MLOps Engineer / AI Infrastructure Specialist to drive AI/ML deployment and scalability. The role involves collaborating with data scientists, managing MLOps pipelines, and leveraging cloud platforms such as AWS SageMaker. This full-time position offers a flexible remote work policy that accommodates various schedules, and requires strong English proficiency (B2+) and more than 8 years of relevant experience. Ideal candidates will excel in Python and in container orchestration with Docker and Kubernetes.

Qualifications

  • 8+ years of experience as an MLOps Engineer / AI Infrastructure Specialist.
  • Proficient in Python and deploying ML models.
  • Deep experience with Docker and Kubernetes.
  • Strong background in CI/CD pipeline management.
  • Experience with cloud-based ML platforms.

Responsibilities

  • Design and maintain scalable MLOps pipelines.
  • Automate workflows with CI/CD tools.
  • Manage containerized environments using Docker.
  • Collaborate with teams to operationalize ML models.
  • Monitor and manage models using cloud platforms.

Skills

Python
CI/CD pipelines
Docker
Kubernetes
English proficiency (B2+)

Tools

TensorFlow
PyTorch
AWS SageMaker
Azure ML
Vertex AI

Full Job Description

About the job: MLOps Engineer / AI Infrastructure Specialist OC-16B

Locations available: All LatAm

Oceans Code Experts is looking for talented individuals who are ready for the next step in their career. We offer a collaborative professional environment as full of rewarding experiences as it is of challenges.

An MLOps Engineer / AI Infrastructure Specialist at Oceans can expect to work on multiple projects with a cross-functional team and to be transparent about time and tasks so that clients can track the progress of their projects.

Candidates must LOVE helping people, solving business problems, and pushing themselves to slay the next beast of a project.

Job Summary

We're looking for a seasoned MLOps Engineer / AI Infrastructure Specialist to drive the deployment, scalability, and automation of AI/ML pipelines. If you're passionate about building robust machine learning infrastructure and working at the intersection of AI and DevOps, this is your opportunity to make an impact.

Job Responsibilities

  • Design, implement, and maintain scalable MLOps pipelines for model training and evaluation.
  • Automate workflows using CI/CD tools such as GitLab, Jenkins, or GitHub Actions.
  • Manage and optimize containerized environments using Docker and orchestrate deployments with Kubernetes.
  • Collaborate with data scientists and engineers to streamline experimentation and operationalize ML models.
  • Deploy, monitor, and manage models using cloud platforms like AWS SageMaker, Azure ML, or Vertex AI.
  • Ensure infrastructure reliability and performance, including logging, versioning, and automated rollback.
  • Maintain punctuality and consistency in remote work environments, particularly for meetings and team coordination.

Job Requirements

  • Strong English proficiency (B2+, written and spoken)
  • 8+ years of experience as an MLOps Engineer / AI Infrastructure Specialist
  • Impeccable punctuality (schedules are flexible, but being on time for meetings is crucial)
  • Proficient in Python and experienced in deploying ML models using TensorFlow and/or PyTorch.
  • Deep hands‑on experience with containerization (Docker) and orchestration (Kubernetes).
  • Strong background in implementing and maintaining CI/CD pipelines.
  • Proven experience working with cloud‑based ML platforms like AWS SageMaker, Azure ML, or Vertex AI.

Nice to have

  • Experience with workflow orchestration tools like Kubeflow or Airflow, and data platforms like Databricks.
  • Monitoring and infrastructure-as-code tools such as Prometheus, Grafana, and Terraform.
  • Familiarity with data versioning tools like DVC or LakeFS.

Position Type and Expected Hours of Work

This is a full-time consultancy of up to 40 hours per week during regular business hours. We operate under a flexible core-hours policy to accommodate various schedules, allowing consultants to work during their peak productivity times. Additionally, we offer the flexibility to work remotely.
