Most devs are writing AI specs the same way they wrote PRDs in 2019, and that is the problem.
When your audience was a human dev, a vague requirement was survivable. They'd push back, ask questions in standup, and use judgment.
When your audience is an AI agent? It fills the gaps with assumptions, and it does so fast, confidently, and at scale. Teams often write this off as a model problem, which is understandable given how non-deterministic models are, but they rarely ask whether the real issue is the spec.
We’ve been building with AiDE Agents long enough to see a clear pattern: the teams getting 5x productivity gains are the ones writing better specs. A good spec reads less like an idea and more like a contract that includes:
– what gets built
– what doesn’t
– how to verify it’s right.
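A minimal sketch of what that contract might look like in practice. Every name and criterion below is invented for illustration; the point is the three-part shape, not the content:

```
Spec: rate-limit the public /search endpoint

What gets built:
  - per-API-key limit of 100 requests/minute, returning HTTP 429 beyond it
What doesn't:
  - no per-IP limiting, no admin endpoints, no UI changes
How to verify:
  - automated test: the 101st request within a minute from one key gets 429
  - automated test: requests from a second key are unaffected
```

The "what doesn't" section is what most teams skip, and it’s exactly the part that stops an eager agent from wandering out of scope.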
With spec-driven development, the approach shifts from “here’s the idea” to “here’s the contract” and this is where most teams leave value on the table.
The other aspect of spec-driven development is breaking work into manageable chunks, each with a clear contract and assigned to individual agents.
Agents are eager by default. Give them too much, and they’ll happily produce a mountain of code that ships your review burden to next week. Give them one well-scoped task with clear acceptance criteria, and you get work that is both verifiable and parallelizable.
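One way to make acceptance criteria truly verifiable is to state them as executable checks the agent’s output must pass before a human ever reviews it. A hedged sketch, assuming a toy task ("validate email input"); the function and criteria here are hypothetical, not from any real spec:

```python
import re

def is_valid_email(address: str) -> bool:
    """The scoped task: one validator with an explicit contract.

    Contract: exactly one "@" separating non-empty local and domain
    parts, with at least one dot in the domain. (Deliberately simple;
    a real spec would say which RFC subset applies.)
    """
    return bool(re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", address))

# The spec's acceptance criteria, written as checks rather than prose.
# Any agent's implementation either passes them or it doesn't: no judgment
# calls, and independent tasks like this can run in parallel.
acceptance_criteria = [
    (is_valid_email("user@example.com"), True),   # well-formed address
    (is_valid_email("no-at-sign"), False),        # missing "@"
    (is_valid_email("two@@signs.com"), False),    # malformed separator
]

for actual, expected in acceptance_criteria:
    assert actual == expected
```

The checks double as the review artifact: instead of reading the diff cold, the reviewer starts from a green (or red) list of criteria the spec already committed to.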
Spec-driven development, at its core, is about making intent explicit before code turns into slop.
Most teams get here eventually, usually after a painful experience.
The agents are ready. The question is whether your specs are. What’s the hardest part of getting that right in your team?