Overview
We are seeking a Lead Corporate Data Officer to join our Corporate Data Office Division (CDOD) within our Cybersecurity Organization. CDOD focuses on AI governance, including the design and implementation of robust AI governance frameworks that ensure the ethical, transparent, and compliant use of AI and machine learning technologies. Your primary role is to oversee the establishment and governance of key areas within the Corporate Data Strategy and to approve the key performance indicators defined for data management and compliance. In addition, you will manage an enterprise-wide data governance framework.
Responsibilities
- Create and enforce policies for responsible AI development, deployment, and monitoring, ensuring alignment with global regulatory standards
- Design and implement comprehensive AI governance frameworks to ensure the ethical, transparent, and compliant use of AI/ML technologies
- Identify and appropriately manage AI risks through the organization’s risk governance process
- Support the AI governance process, including management of the AI inventory, governance meetings, and management reporting
- Align AI governance strategies with organizational objectives, regulatory requirements, and industry standards
- Conduct AI risk assessments and develop risk mitigation strategies that align with the organization’s risk appetite
- Monitor emerging regulatory trends and ensure proactive adaptation of governance policies to maintain compliance with evolving legal and ethical standards
- Develop and deliver awareness programs to educate stakeholders on AI governance principles, policies, and compliance requirements
- Provide insights and recommendations on governance-related challenges and opportunities
- Oversee periodic assessments of AI systems to validate adherence to governance standards and regulatory obligations
- Conduct impact assessments of AI systems, including the identification of potential biases, discrimination, and negative impacts on Saudi Aramco, individuals, and society
- Establish AI evaluation processes
- Research AI safety, fairness, bias, transparency, and explainability
Qualifications
- Bachelor's degree in Computer Science, Data Science, Engineering, or a related field. A Master's degree in Machine Learning, Data Science, or a related field is preferred.
- Minimum of 15 years of experience in application development, system administration, and data management, including 10 years of experience implementing AI, computer vision, natural language processing (NLP), and large language model (LLM) solutions.
- 5 years of experience designing and implementing AI governance and risk management programs is required.
- Demonstrated strong knowledge of responsible AI and AI evaluations, bias detection, data-centric approaches, agentic AI, AIOps, data privacy by design, LLM red-teaming, methodologies for explainability and interpretability, and cloud computing environments.