
DevOps Engineer (Senior) ID38561

AgileEngine

Puebla de Zaragoza

On-site

MXN 400,000 - 600,000

Full-time

Posted 12 days ago

Vacancy Description

AgileEngine, a leader in software development for Fortune 500 clients, seeks a Kubernetes Engineer with over 5 years' experience in AWS environments. This role focuses on robust Kubernetes operations, data pipeline automation, and cross-team collaboration. Join us for competitive pay, professional growth, and the chance to work on exciting projects while enjoying a flexible work-life balance.

Benefits

Professional growth opportunities
Competitive USD-based compensation
Flexible work schedule

Requirements

  • 5+ years of experience managing Kubernetes clusters, preferably in AWS.
  • Strong hands-on skills with Argo Workflows and Terraform.
  • Upper-Intermediate English level required.

Responsibilities

  • Design, deploy, and manage scalable Kubernetes environments.
  • Lead migration projects to GitLab and CI/CD automation.
  • Collaborate with cross-functional teams on cloud modernization.

Skills

Kubernetes
AWS Cloud Services
Argo Workflows
Git
Terraform
SQL
Linux Systems Administration

Education

Bachelor's degree in Computer Science, Engineering, or related field

Job Description

AgileEngine is an Inc. 5000 company that creates award-winning software for Fortune 500 brands and trailblazing startups across 17+ industries. We rank among the leaders in areas like application development and AI/ML, and our people-first culture has earned us multiple Best Place to Work awards.

If you're looking for a place to grow, make an impact, and work with people who care, we'd love to meet you!


WHAT YOU WILL DO

- Kubernetes Operations: Design, deploy, and operate scalable and robust Kubernetes environments (EKS or similar) supporting data and analytics workloads;
- Argo Workflows: Build, automate, and maintain complex data pipelines using Argo Workflows for orchestration, scheduling, and workflow automation;
- GitLab/Git Migration Projects: Lead or support migration of source code repositories and CI/CD pipelines to GitLab or other Git-based platforms. Automate and optimize testing, deployment, and delivery using GitOps principles;
- Infrastructure as Code: Develop and manage infrastructure with Terraform and related tools, implementing infrastructure automation and repeatable deployments in AWS and Kubernetes;
- Data Platform Reliability: Support high-availability S3-based data lake environments and associated data tooling, ensuring robust monitoring, scalability, and security;
- Observability: Instrument, monitor, and create actionable alerts and dashboards for Kubernetes clusters, Argo workflows, and data platforms to quickly surface and resolve operational issues;
- Incident & Problem Management: Participate in incident, problem, and change management processes, and proactively drive improvements in reliability KPIs (MTTD/MTTR/availability);
- Collaboration: Work cross-functionally with Data Engineering, SRE, Product, and Business teams to deliver resilient solutions and support key initiatives like Git migration and cloud modernization;
- Security & Networking: Apply best practices in networking (Layer 4-7), firewalls, VPNs, IAM, and data encryption across the cloud/data stack;
- Capacity & Performance: Engage in capacity planning, forecasting, and performance tuning for large-scale cloud and Kubernetes-based workloads.

MUST HAVES

- Bachelor’s degree in Computer Science, Engineering, or related field, or equivalent experience;
- 5+ years of production experience operating and managing Kubernetes clusters (preferably in AWS, EKS, or similar environments);
- Strong hands-on experience with AWS cloud services;
- Deep hands-on experience with Argo Workflows, including developing, deploying, and troubleshooting complex pipelines;
- Experience with Git, GitLab, and CI/CD, including leading or supporting migration projects and the adoption of GitOps practices;
- Effective at developing infrastructure as code with Terraform and related automation tools;
- Practical experience in automating data workflows and orchestration in a cloud-native environment;
- Proficient in SQL and basic scripting (Python or similar);
- Sound understanding of networking (Layer 4-7), security, and IAM in cloud environments;
- Proficient in Linux-based systems administration (RedHat/CentOS/Ubuntu/Amazon Linux);
- Strong written and verbal communication skills;
- Ability to collaborate in cross-functional environments;
- Track record delivering reliable, secure, and scalable data platforms in rapidly changing environments;
- Experience working with S3-based data lakes or similar large, cloud-native data repositories;
- Upper-Intermediate English level.

NICE TO HAVES

- Exposure to regulated or healthcare environments;
- Familiarity with data modeling, analytics/BI platforms, or DBT;
- Experience leading software/tooling migrations (e.g., Bitbucket to GitLab), or managing large-scale CI/CD consolidations.

THE BENEFITS OF JOINING US

- Professional growth: Accelerate your professional journey with mentorship, TechTalks, and personalized growth roadmaps.

- Competitive compensation: We match your ever-growing skills, talent, and contributions with competitive USD-based compensation and budgets for education, fitness, and team activities.

- A selection of exciting projects: Join projects involving modern solution development with top-tier clients, including Fortune 500 enterprises and leading product brands.

- Flextime: Tailor your schedule for an optimal work-life balance, with the option of working from home or from the office, whichever makes you happiest and most productive.


Your application doesn't end here! To unlock the next steps, check your email and complete your registration on our Applicant Site. Incomplete registration will result in the termination of your application process.

