
A global agribusiness leader is looking for a Principal Machine Learning Engineer based in São Paulo, Brazil. In this role, you will collaborate closely with data scientists to deploy machine learning solutions and build scalable infrastructures on Google Cloud Platform. Ideal candidates will have over 5 years of DevOps experience and a strong foundation in software design and cloud services. The position offers a hybrid work environment and a strong benefits package.
City: São Paulo State: São Paulo (BR-SP) Country: Brazil (BR) Requisition Number: 43238
Bunge has an exciting opportunity available for a Principal Machine Learning Engineer. In this role, you will be part of a global team working on challenging, meaningful projects impacting core business activities. Since 1818, Bunge has been connecting farmers to consumers to deliver essential food, feed, and fuel to the world. Looking to the future, our ambition is to continuously reinvent ourselves, leveraging data to be at the forefront of analytics, technology, and talent to accomplish our purpose in a better, faster, and simpler way. Bunge is committed to operating and thriving in the digital world, creating world-class agile teams where teammates are empowered and encouraged to collaborate, test, and learn to succeed.
At Bunge, people don’t just come here to work, they come here to grow, solving challenges that directly impact the world with a diverse team of thinkers and doers. Bunge offers a strong compensation and benefits package, a generous paid time off program, flexible work arrangements, and opportunities to progress. Our hybrid work environment provides a balance of in-office and remote work.
Most importantly, in all we do, we live our values.
We are seeking GCP Machine Learning Engineers (MLEs) with strong experience deploying machine learning solutions into production using state-of-the-art tools, algorithms, and methodologies, following DevOps and test-driven development practices. MLEs work in close collaboration with data scientists and data engineers, guiding them to focus not only on model and pipeline performance but also on the delivery stability, reproducibility, and scalability of a software product.
Bachelor’s degree in Computer Science, Information Technology, or a related field, or an equivalent combination of education and work experience.
At least 5 years of proven, hands-on DevOps engineering experience with major public cloud services, preferably GCP, including but not limited to Compute Engine, GKE, BigQuery, Cloud Run, and Cloud Composer. Bonus points if you have obtained Google Cloud Professional certifications (such as Architect, Data Engineer, or DevOps).
Plays a significant role in developing junior engineers, acts as a technical consultant to product managers, and is a valued asset to any data science project. Serves as a domain expert and functional advisor for business stakeholders.
Ability to apply software development best practices to machine learning projects, including unit testing, DevOps integration, release management, and test-driven development. Ability to automate the development process of machine learning projects by leveraging state-of-the-art tools and technologies such as containers, continuous integration and delivery, and orchestration tools.
Familiarity with at least one major cloud platform, preferably GCP. Able to recommend and select the right cloud services to address technical problems.
Good understanding of and experience with software design principles and design patterns. Ability to transform proof-of-concept machine learning models into scalable solutions.
Good understanding of Agile and Scrum methodologies, keeping the team focused on delivering business value.
Self‑motivated with strong problem‑solving and learning skills.
Flexibility to adapt to changes in work direction as the project develops.
Believes in a non-hierarchical culture of collaboration, transparency, safety, and trust. Works with a focus on value creation, growth, and serving customers with full ownership and accountability.
Demonstrable Terraform experience.
Experience implementing Kubernetes and Docker (or a similar container engine) solutions, and building large-scale monitoring solutions with Google Cloud Monitoring or other tools (e.g., Datadog, Sentry, Prometheus, Grafana, SolarWinds).
Knowledge of Python and at least one other scripting language (e.g., Bash). Experience building data platform products and working with big data technologies such as Spark is a plus.
Fluency in English.
At Bunge (NYSE: BG), our purpose is to connect farmers to consumers to deliver essential food, feed and fuel to the world. As a premier agribusiness solutions provider, our team of ~37,000 dedicated employees partner with farmers across the globe to move agricultural commodities from where they’re grown to where they’re needed—in faster, smarter, and more efficient ways. We are a world leader in grain origination, storage, distribution, oilseed processing and refining, offering a broad portfolio of plant-based oils, fats, and proteins. We work alongside our customers at both ends of the value chain to deliver quality products and develop tailored, innovative solutions that address evolving consumer needs. With 200+ years of experience and presence in over 50 countries, we are committed to strengthening global food security, advancing sustainability, and helping communities prosper where we operate. Bunge has its registered office in Geneva, Switzerland and its corporate headquarters in St. Louis, Missouri. Learn more at Bunge.com.
If this sounds like you, join us! We value and invest in people who believe in our purpose and are excited to live it every day – people who are #ProudtoBeBunge