An innovative firm is seeking a passionate engineer to enhance the safety and accountability of AI systems. The role involves designing tools to evaluate LLM behavior and developing metrics that build trust in AI applications. You will collaborate across teams to create a cohesive content safety experience while working with cutting-edge technology. With a flexible hybrid schedule and a mission-driven environment, this is an opportunity to make a meaningful impact in the evolving landscape of AI. Join a team that values clarity and respect, and help shape the future of responsible AI.
Bonfy.AI | Mountain View, CA | Hybrid
Security for the Age of AI
At Bonfy.AI, we’re building the trust layer for generative AI. Our Adaptive Content Security platform detects and mitigates subtle risks baked into large language model (LLM) outputs—before they make it to the user. From hallucinations to hidden data leaks, we help enterprises use GenAI without compromising truth, privacy, or reputation.
We’re model-agnostic, outcome-focused, and unapologetically rigorous. Our customers include Fortune 500 teams deploying LLMs in high-stakes domains, where trust isn’t optional.
We’re looking for an engineer who wants to go deeper than metrics—someone who can analyze model behavior, identify subtle failure modes, and build real-time systems that make AI safer to use. You won’t be tuning models for leaderboard glory; you’ll be making them safer, traceable, and accountable. This is a chance to shape the foundation of how the world trusts AI.
Competitive salary. Generous equity. Flexible hybrid schedule. Health, vision, and dental coverage. And most importantly: a chance to build something meaningful during the most critical phase of AI’s evolution.
Bonfy.AI — Truth. Security. Intelligence.