
A dynamic tech company is seeking a QA Engineer to lead the full QA lifecycle for Agentic AI products while ensuring high-quality software delivery. The role involves designing test plans, performing manual and automated testing, and collaborating with cross-functional teams. Ideal candidates will have experience testing GenAI products and a strong focus on automation and exploratory testing. They will enjoy a remote-first culture with high autonomy and opportunities for accelerated leadership growth.
Wing is seeking elite talent to join M32 AI (a subsidiary of Wing, backed by top-tier Silicon Valley VCs), dedicated to building agentic AI for traditional service businesses.
Think of it as a startup within a corporation: fast‑moving and agile, with the stability of an established company and zero bureaucracy.
If you're driven by challenge and eager to make a significant impact in a high‑caliber role, this is the opportunity you've been waiting for.
Own and evolve the test automation ecosystem that keeps us shipping delightful, bug‑free experiences every week.
This role combines deep manual and UX testing with structured exploratory testing and targeted automation to create a robust quality foundation for our products.
Own the full QA lifecycle for Agentic AI products: strategy, design, execution, reporting, and release sign‑off.
Design and run test plans covering functional, regression, smoke, exploratory, and usability testing for AI behavior and decision chains.
Validate multi‑step decision flows and reasoning to catch logic gaps, guardrail failures, or requirement mismatches.
Perform structured exploratory testing to uncover unexpected behaviors, edge cases, and cascading AI failures.
Build synthetic test scripts for UI elements, APIs, and end‑to‑end flows to verify functionality.
Test across platforms (web, mobile, integrations) for consistency and performance.
Maintain dashboards tracking test coverage, failures, and quality KPIs for all stakeholders.
Improve test reliability: fix flakiness, optimize parallel runs, and cut execution time.
Partner with Product, Design, and Engineering to refine requirements and set clear go/no‑go criteria.
Monitor pre‑ and post‑release quality; use data to enhance AI evaluation and guardrails.
After that, we extend the offer. Our hiring cycle is short and fast, and can be completed in as little as 7 days.
We hire for output instead of pedigree. If your systems never miss a bug, we want you on the team.