
A leading digital financial services provider in São Paulo is seeking an AI Risk Management Lead to drive AI Risk Strategy & Governance. You will enhance cross-coordination between risk teams and promote responsible AI practices while ensuring regulatory compliance. The ideal candidate has a strong background in risk management, excellent project management skills, and proficiency in English. This role offers various benefits, including a chance to earn equity and comprehensive health coverage.
Nubank is one of the largest digital financial services platforms in the world, empowering millions of customers across Latin America to take control of their financial lives.
We’re driven by an "AI-First" vision, leveraging cutting‑edge technology to redefine financial services and deliver exceptional experiences.
Our commitment to responsible AI is at the core of this vision, ensuring that innovation is balanced with robust risk management.
This is an opportunity to lead and be accountable for AI Governance, be at the forefront of innovation, and expand the reach of AI Risk Management within an "AI-First" organization that is deeply committed to embedding responsible AI practices.
As an AI Risk Management Lead, you will:
In close partnership with the AI Governance Working Group, drive the implementation and evolution of Nubank’s Global AI Policy, ensuring an interdisciplinary approach to AI risks and integration with existing risk governance frameworks.
Act as a central point for AI risk management, fostering seamless collaboration and communication between risk and business teams across Model, Data, Privacy, Information Security and IT Risks, as well as Platform, Engineering and Model Development teams.
Ensure AI risks are appropriately managed within Nubank’s Enterprise Risk Management framework, including defining Nubank’s classification of AI systems following a risk‑based approach.
Collaborate with leadership to define the organization’s AI risk appetite and monitor adherence to established thresholds.
Partner with various teams, leveraging existing risk assessment flows, to proactively identify, assess, and manage existing and emerging AI risks across Third‑Party Tools, Decision‑Making and Customer‑Facing Models, and Internal AI Productivity Agents.
Diagnose process gaps and propose specific improvements for AI adoption, focusing on areas such as experimentation flows and AI system lifecycle governance.
Partner with Model Risk and Data Science teams to establish quality standards for AI models, such as foundation models and customer‑facing models based on LLMs and GenAI, enhance explainability efforts, and contribute to the development of a comprehensive Responsible AI Framework.
Keep up to date with industry best practices, new trends and legal & regulatory requirements, proposing necessary updates to the AI Risk Management framework and best practices guidelines.
Contribute to the design and implementation of AI literacy programs to foster critical understanding and responsible data handling.
Track AI usage and risks, developing standardized metrics and leadership reporting to ensure comprehensive risk coverage and regulatory adherence.
Ensure there are effective incident response processes in place, including clear contingency plans for AI‑related incidents.