At Medtronic you can begin a life-long career of exploration and innovation, while helping champion healthcare access and equity for all. You’ll lead with purpose, breaking down barriers to innovation in a more connected, compassionate world.
Our Global Diabetes Capability Center in Pune is expanding to serve more people living with diabetes globally. Our state-of-the-art facility is dedicated to transforming diabetes management through innovative solutions and technologies that reduce the burden of living with diabetes. We’re a mission-driven leader in medical technology and solutions with a legacy of integrity and innovation. Join our new Minimed India Hub as a Data Operations Engineer. This role requires a strong background in DevOps, DataOps, or Cloud Engineering practices, with extensive experience automating CI/CD pipelines and working with modern data stack technologies.
This role offers a dynamic opportunity to join Medtronic's Diabetes business. Medtronic has announced its intention to separate the Diabetes division to promote future growth and innovation within the business and reallocate investments and resources across Medtronic. Upon establishment of SpinCo or the transition of the Diabetes business to another company, your employment may transfer to either SpinCo or the other company, at Medtronic's discretion and subject to applicable information and consultation requirements in your jurisdiction.
Develop and maintain robust, scalable data pipelines and infrastructure automation workflows using GitHub, AWS, and Databricks.
Implement and manage CI/CD pipelines using GitHub Actions and GitLab CI/CD for automated infrastructure deployment, testing, and validation.
Deploy and manage Databricks LLM Runtime or custom Hugging Face models within Databricks notebooks and model serving endpoints.
Manage and optimize cloud infrastructure costs, usage, and performance through tagging policies, right-sizing EC2 instances, storage tiering strategies, and auto-scaling (a tagging-enforcement sketch follows this list).
Set up infrastructure observability and performance dashboards using AWS CloudWatch for real-time insights into cloud resources and data pipelines (a dashboard sketch follows this list).
Develop and manage Terraform or CloudFormation modules to automate infrastructure provisioning across AWS accounts and environments.
Implement and enforce cloud security policies, IAM roles, encryption mechanisms (KMS), and compliance configurations.
Administer Databricks Workspaces, clusters, access controls, and integrations with Cloud Storage and identity providers.
Enforce DevSecOps practices for infrastructure-as-code, ensuring all changes are peer-reviewed, tested, and compliant with internal security policies.
Coordinate cloud software releases, patching schedules, and vulnerability remediations using Systems Manager Patch Manager.
Automate AWS housekeeping and operational tasks (a report-only sketch follows this list), such as:
Cleanup of unused EBS volumes, snapshots, and old AMIs
Rotation of secrets and credentials using Secrets Manager
Log retention enforcement using S3 Lifecycle policies and CloudWatch Log groups
Perform incident response, disaster recovery planning, and post-mortem analysis for operational outages.
Collaborate with cross-functional teams including Data Scientists, Data Engineers, and other stakeholders to gather and implement infrastructure and data requirements.
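To illustrate the tagging-enforcement side of the cost-management responsibility above, here is a minimal sketch in Python with boto3. The required tag keys ("CostCenter", "Owner") and the region are placeholders for illustration, not an actual tagging policy.

```python
# Minimal sketch (Python/boto3): report EC2 instances missing required cost-allocation tags.
# The tag keys and region below are placeholders, not an actual tagging policy.
import boto3

REQUIRED_TAGS = {"CostCenter", "Owner"}  # hypothetical required keys

def find_untagged_instances(region="ap-south-1"):
    ec2 = boto3.client("ec2", region_name=region)
    untagged = []
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                present = {t["Key"] for t in instance.get("Tags", [])}
                missing = REQUIRED_TAGS - present
                if missing:
                    untagged.append((instance["InstanceId"], sorted(missing)))
    return untagged

if __name__ == "__main__":
    for instance_id, missing in find_untagged_instances():
        print(f"{instance_id} missing tags: {', '.join(missing)}")
```

In practice a script like this would feed a remediation workflow (tag, notify the owner, or stop the instance) rather than only printing a report.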
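For the observability responsibility, the following is a minimal sketch of publishing a CloudWatch dashboard with boto3. The Lambda function name "ingest-pipeline", the dashboard name, and the region are illustrative assumptions.

```python
# Minimal sketch (Python/boto3): publish a CloudWatch dashboard for a pipeline Lambda.
# The function name "ingest-pipeline", dashboard name, and region are illustrative.
import json
import boto3

def put_pipeline_dashboard(region="ap-south-1"):
    cloudwatch = boto3.client("cloudwatch", region_name=region)
    body = {
        "widgets": [
            {
                "type": "metric",
                "x": 0, "y": 0, "width": 12, "height": 6,
                "properties": {
                    "title": "Pipeline Lambda errors and duration",
                    "region": region,
                    "period": 300,
                    "stat": "Sum",
                    "metrics": [
                        ["AWS/Lambda", "Errors", "FunctionName", "ingest-pipeline"],
                        ["AWS/Lambda", "Duration", "FunctionName", "ingest-pipeline",
                         {"stat": "Average"}],
                    ],
                },
            }
        ]
    }
    cloudwatch.put_dashboard(
        DashboardName="data-pipeline-observability",
        DashboardBody=json.dumps(body),
    )
```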
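And for the housekeeping items, a minimal report-only sketch that lists unattached EBS volumes and snapshots older than a retention window. The 90-day window and region are assumptions, and deletion is intentionally left out of the sketch.

```python
# Minimal sketch (Python/boto3): report unattached EBS volumes and snapshots older than
# a retention window. Report-only on purpose; the 90-day window and region are assumptions.
from datetime import datetime, timedelta, timezone
import boto3

RETENTION_DAYS = 90

def housekeeping_report(region="ap-south-1"):
    ec2 = boto3.client("ec2", region_name=region)
    cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)

    # Volumes in "available" status are not attached to any instance.
    stale_volumes = [
        v["VolumeId"]
        for v in ec2.describe_volumes(
            Filters=[{"Name": "status", "Values": ["available"]}]
        )["Volumes"]
    ]

    # Snapshots owned by this account and older than the retention window.
    stale_snapshots = [
        s["SnapshotId"]
        for s in ec2.describe_snapshots(OwnerIds=["self"])["Snapshots"]
        if s["StartTime"] < cutoff
    ]
    return stale_volumes, stale_snapshots

if __name__ == "__main__":
    volumes, snapshots = housekeeping_report()
    print("Unattached EBS volumes:", volumes)
    print(f"Snapshots older than {RETENTION_DAYS} days:", snapshots)
```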
4+ years of experience in DataOps / CloudOps / DevOps roles, with strong focus on infrastructure automation, data pipeline operations, observability, and cloud administration.
Strong proficiency in at least one scripting language (e.g., Python, Bash) and one infrastructure-as-code tool (e.g., Terraform, CloudFormation) for building automation scripts for AWS resource cleanup, tagging enforcement, monitoring and backups.
Hands-on experience integrating and operationalizing LLMs in production pipelines, including prompt management, caching, token tracking, and post-processing (a wrapper sketch follows below).
Deep hands-on experience with AWS services, including:
Core: EC2, S3, RDS, CloudWatch, IAM, Lambda, VPC
Data services: Athena, Glue, MSK, Redshift
Security: KMS, IAM, Config, CloudTrail, Secrets Manager
Operational: Auto Scaling, Systems Manager, CloudFormation/Terraform
Machine learning/AI: Bedrock, SageMaker, OpenSearch Serverless
Working knowledge of Databricks, including cluster/workspace management and integration with AWS storage and identity (IAM passthrough).
Experience deploying and managing CI/CD workflows using GitHub Actions, GitLab CI, or AWS CodePipeline.
Strong understanding of cloud networking (VPC peering, Transit Gateway, security groups, PrivateLink).
Familiarity with container orchestration platforms (Kubernetes, ECS).
Strong understanding of data modeling, data warehousing concepts, and AI/ML lifecycle management.
Knowledge of cost optimization strategies across compute, storage, and network layers.
Experience with data governance, logging, and compliance practices in cloud environments (SOC 2, HIPAA, GDPR).
Bonus: LangChain, prompt engineering, Retrieval-Augmented Generation (RAG), and vector database integration (AWS OpenSearch, Pinecone, Milvus).
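As a sketch of the LLM-operationalization item above: a small Python wrapper around a model-serving endpoint that adds prompt caching and token tracking. The endpoint URL, auth token, and the OpenAI-style "usage"/"choices" response shape are assumptions, not a specific Databricks or Hugging Face contract.

```python
# Minimal sketch (Python): wrap a model-serving endpoint call with prompt caching and
# token tracking. The endpoint URL, auth token, and the OpenAI-style "usage"/"choices"
# response shape are assumptions, not a specific Databricks or Hugging Face contract.
import hashlib
import requests

class LLMClient:
    def __init__(self, endpoint_url, api_token):
        self.endpoint_url = endpoint_url
        self.api_token = api_token
        self.cache = {}        # prompt hash -> completion text
        self.total_tokens = 0  # running usage counter for cost tracking

    def complete(self, prompt, max_tokens=256):
        key = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
        if key in self.cache:
            return self.cache[key]

        response = requests.post(
            self.endpoint_url,
            headers={"Authorization": f"Bearer {self.api_token}"},
            json={"prompt": prompt, "max_tokens": max_tokens},
            timeout=60,
        )
        response.raise_for_status()
        payload = response.json()

        self.total_tokens += payload.get("usage", {}).get("total_tokens", 0)
        text = payload["choices"][0]["text"]
        self.cache[key] = text
        return text
```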
AWS Certified Solutions Architect, DevOps Engineer or SysOps Administrator certifications.
Hands-on experience with multi-cloud environments, particularly Azure or GCP, in addition to AWS.
Experience with infrastructure cost management tools like AWS Cost Explorer or FinOps dashboards (a Cost Explorer sketch follows this list).
Ability to write clean, production-grade Python code for automation scripts, operational tooling, and custom CloudOps utilities.
Prior experience in supporting high-availability production environments with disaster recovery and failover architectures.
Understanding of Zero Trust architecture and security best practices in cloud-native environments.
Experience with automated cloud resources cleanup, tagging enforcement, and compliance-as-code using tools like Terraform Sentinel.
Familiarity with Databricks Unity Catalog, access control frameworks, and workspace governance.
Strong communication skills and experience working in agile cross-functional teams, ideally with Data Product or Platform Engineering teams.
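As a sketch of the cost-management item above: a short Python/boto3 query against Cost Explorer that groups the previous month's unblended cost by a cost-allocation tag. The tag key "CostCenter" is illustrative.

```python
# Minimal sketch (Python/boto3 Cost Explorer): previous month's unblended cost grouped
# by a cost-allocation tag. The tag key "CostCenter" is illustrative.
from datetime import date, timedelta
import boto3

def monthly_cost_by_tag(tag_key="CostCenter"):
    ce = boto3.client("ce")
    end = date.today().replace(day=1)                  # first day of current month
    start = (end - timedelta(days=1)).replace(day=1)   # first day of previous month
    result = ce.get_cost_and_usage(
        TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "TAG", "Key": tag_key}],
    )
    for period in result["ResultsByTime"]:
        for group in period["Groups"]:
            amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
            print(group["Keys"][0], round(amount, 2))
```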
The above statements are intended to describe the general nature and level of work being performed by employees assigned to this position, but they are not an exhaustive list of all the required responsibilities and skills of this position.
Medtronic offers a competitive salary and flexible benefits package. A commitment to our employees lives at the core of our values. We recognize their contributions and they share in the success they help to create. We offer a wide range of benefits, resources, and competitive compensation plans designed to support you at every career and life stage.
We lead global healthcare technology and boldly attack the most challenging health problems facing humanity by searching out and finding solutions. Our Mission — to alleviate pain, restore health, and extend life — unites a global team of 95,000+ passionate people. We are engineers at heart, putting ambitious ideas to work to generate real solutions for real people. From the R&D lab, to the factory floor, to the conference room, every one of us experiments, creates, builds, improves and solves. We have the talent, diverse perspectives, and guts to engineer the extraordinary.
Learn more about our business, mission, and our commitment to diversity here