About this Talk
In the world of AI-native development, evals aren’t just quality gates; they’re powerful design tools. For codegen products especially, targeted evals can drive clarity, speed, and alignment across teams. This session dives into how code-centric evals can be used to prototype developer-facing AI features, bring cross-functional (XFN) stakeholders into the loop early, and set measurable goals for launch quality. Learn how to evolve your product rapidly with user feedback loops grounded in meaningful code evaluations, treating evals as the “Figma mock” of AI coding tools.