A leading technology company located in Mountain View, CA is seeking a Principal Engineering Analyst for its Trust & Safety team. This role involves pioneering efforts in AI Red Teaming, ensuring safe content across platforms by analyzing data and managing strategic initiatives. The ideal candidate will possess extensive experience in data analysis and project management, with a background in AI technologies. This position offers a competitive salary and benefits package.
Fast-paced, dynamic, and proactive, YouTube’s Trust & Safety team is dedicated to making YouTube a safe place for users, viewers, and content creators around the world to create and express themselves. Whether understanding and solving their online content concerns, navigating within global legal frameworks, or writing and enforcing worldwide policy, the Trust & Safety team is on the frontlines of enhancing the YouTube experience, building internet safety, and protecting free speech in our ever-evolving digital world.
We are seeking a pioneering expert in AI Red Teaming with technical proficiency to shape our approaches to adversarial testing of Google's generative AI products.
You will blend your domain expertise in GenAI red teaming and adversarial testing with technical acumen, driving creative and ambitious solutions to testing challenges and ultimately preventing abusive content or misuse of our products. You will demonstrate an ability to grow in a dynamic, changing research and product development environment.
Combining red teaming and technical experience will enable you to design and direct operations, creating innovative methodologies to uncover novel content abuse risks, while supporting the team in the design, development, and delivery of technical solutions to testing and process limitations. You will be a key advisor to executive leadership, leveraging your influence across Product, Engineering, and Policy teams to drive initiatives.
You will mentor analysts, fostering a culture of continuous learning and sharing your expertise in adversarial techniques. You will also represent Google's AI safety efforts in external forums, collaborating with industry partners to develop best practices for responsible AI and solidifying our position as a thought leader in the field.
The US base salary range for this full-time position is $174,000-$258,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.
Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.
Google is proud to be an equal opportunity and affirmative action employer. We are committed to building a workforce that is representative of the users we serve, creating a culture of belonging, and providing an equal employment opportunity regardless of race, creed, color, religion, gender, sexual orientation, gender identity/expression, national origin, disability, age, genetic information, veteran status, marital status, pregnancy or related condition (including breastfeeding), expecting or parents-to-be, criminal histories consistent with legal requirements, or any other basis protected by law. See also Google's EEO Policy, Know your rights: workplace discrimination is illegal, Belonging at Google, and How we hire.
Google is a global company and, in order to facilitate efficient collaboration and communication globally, English proficiency is a requirement for all roles unless stated otherwise in the job posting.
To all recruitment agencies: Google does not accept agency resumes. Please do not forward resumes to our jobs alias, Google employees, or any other organization location. Google is not responsible for any fees related to unsolicited resumes.