Why Old-School Requirements Matter More Than Ever
This is a controversial take, and I am fine with that. For the past 20 years, the software industry has been running away from waterfall. "Move fast." "Iterate." "Ship and learn." And for human teams, that approach works. Humans can fill gaps. Humans can infer intent from a vague user story. Humans can walk over to a product manager and ask, "What did you actually mean by this?"
AI agents cannot do any of that.
Agile optimised for human adaptability. AI agents need the thing agile threw away: detailed, upfront requirements.
The discipline that waterfall demanded (writing out every requirement, every edge case, every user flow before a single line of code) turns out to be exactly what AI agents need to produce reliable output. We did not go backwards. The context changed.
What Agile Skipped
In most agile teams, the unit of work is a user story. A typical user story looks like this:
"As a user, I want to reset my password so that I can regain access to my account."
One sentence. A human developer reads this and fills in dozens of implicit decisions:
- What happens if the email does not exist in the system?
- How long is the reset token valid?
- What happens if the user clicks the link after it expires?
- Can the user reset their password if their account is locked?
- What does the success screen look like?
- What does the error screen look like?
- Is there a rate limit on reset requests?
- Does the user receive a confirmation email after successful reset?
A developer resolves these questions through experience, conversation, and common sense. They ask a product manager. They check how similar features work in the app. They make reasonable assumptions.
An AI agent does none of this. It reads the user story literally. If the story says "reset password," the agent implements a password reset. Everything that is not explicitly stated gets guessed. And AI guesses are not like human guesses: they are confident, syntactically correct, and often subtly wrong.
The result: code that looks right, passes a superficial review, and breaks in production because an edge case was never specified.
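To make this concrete, here is a minimal sketch of what happens when two of the questions above get explicit answers in the spec instead of being left to a guess. The constant names and values are hypothetical; a real requirements document would pin down its own numbers.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical spec decisions -- the point is that they are written down,
# not that these particular values are right for every product.
RESET_TOKEN_TTL = timedelta(hours=1)   # "How long is the reset token valid?"
MAX_RESETS_PER_HOUR = 3                # "Is there a rate limit on reset requests?"

def can_use_reset_token(issued_at: datetime, now: datetime) -> bool:
    """Spec decision: a token is usable until exactly RESET_TOKEN_TTL after issue."""
    return now - issued_at <= RESET_TOKEN_TTL

def is_rate_limited(requests_in_last_hour: int) -> bool:
    """Spec decision: the Nth request within an hour is refused, not queued."""
    return requests_in_last_hour >= MAX_RESETS_PER_HOUR
```

Without the spec, an agent picks its own TTL and its own rate limit (or none at all), and the guess only surfaces when someone hits it in production.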
What the Requirements Document Should Contain
A requirements document for AI-assisted development needs to be explicit about everything a human developer would figure out implicitly. At minimum, it should include:
1. The Happy Path
The primary flow from start to finish. Step by step, with every screen, every input, every response defined. No ambiguity.
2. The Error Paths
Every way the happy path can fail. Invalid input, expired tokens, network errors, permission failures, rate limits, duplicate submissions. Each error path needs its own defined behaviour: what error message appears, what status code returns, what the user sees.
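One way to keep error paths honest is to write them as a table the agent (and the reviewer) can read directly. A minimal sketch, using hypothetical failure modes and messages for the password-reset example:

```python
# Hypothetical error-path spec for a password-reset endpoint:
# every failure mode gets an explicit status code and user-facing message.
ERROR_PATHS: dict[str, tuple[int, str]] = {
    "token_expired":  (410, "This reset link has expired. Request a new one."),
    "token_invalid":  (400, "This reset link is not valid."),
    "rate_limited":   (429, "Too many reset requests. Try again later."),
    "account_locked": (403, "This account is locked. Contact support."),
}

def error_response(failure_mode: str) -> tuple[int, str]:
    """Look up the specified behaviour; an unlisted mode is a spec gap, not a guess."""
    return ERROR_PATHS[failure_mode]
```

An unknown key raises `KeyError` on purpose: a failure mode missing from the table is a hole in the requirements, not something to improvise around.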
3. User Flows and Decision Trees
If the feature has conditional logic (user is logged in vs. not, user is admin vs. regular, user is on mobile vs. desktop), each branch needs its own documented flow. Draw decision trees. Make the branches explicit.
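A documented decision tree translates almost mechanically into code, which is exactly why it helps an agent. A sketch using the example branches above (the variant names are made up for illustration):

```python
def dashboard_variant(is_logged_in: bool, is_admin: bool, is_mobile: bool) -> str:
    """Every branch of the documented decision tree, made explicit.

    Hypothetical variant names; the structure mirrors the spec's tree.
    """
    if not is_logged_in:
        return "public-landing"
    if is_admin:
        return "admin-mobile" if is_mobile else "admin-desktop"
    return "user-mobile" if is_mobile else "user-desktop"
```

With the tree in the spec, there is one obviously correct implementation. Without it, the agent invents its own branching and silently drops the cases nobody mentioned.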
4. Personas
Who is using this feature? A first-time user behaves differently from a power user. An admin has different needs from a regular user. Define the personas and describe how each one interacts with the feature. This gives the AI context for making decisions that user stories leave implicit.
5. Edge Cases
What happens at the boundaries? Empty strings, extremely long inputs, special characters, concurrent requests, race conditions, timezone differences. List them. For each one, define the expected behaviour.
6. Validation Rules
Every input field needs explicit validation: minimum length, maximum length, allowed characters, format requirements, uniqueness constraints, relationship to other fields.
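Validation rules are the easiest part of a spec to write as executable checks. A minimal sketch for a single hypothetical field (the limits and allowed characters are illustrative, not prescriptive):

```python
import re

def validate_username(value: str) -> list[str]:
    """Hypothetical explicit rules: 3-20 chars; lowercase letters, digits, underscores.

    Returns every rule violation rather than stopping at the first,
    so each rule in the spec maps to exactly one check.
    """
    errors: list[str] = []
    if len(value) < 3:
        errors.append("too short (min 3)")
    if len(value) > 20:
        errors.append("too long (max 20)")
    if not re.fullmatch(r"[a-z0-9_]*", value):
        errors.append("invalid characters (allowed: a-z, 0-9, _)")
    return errors
```

When every field has a block like this in the spec, "left to developer judgment" becomes "copied from the requirements," which is the only mode in which an agent is reliable.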
The Comparison
| Aspect | Agile User Story | Functional Requirements Spec |
|---|---|---|
| Length | 1-2 sentences | Multiple pages |
| Happy path | Implied | Explicitly documented step by step |
| Error paths | Rarely mentioned | Every failure mode documented |
| Edge cases | Discovered during development | Listed upfront with expected behaviour |
| Personas | Generic "As a user" | Named personas with specific contexts |
| Validation rules | Left to developer judgment | Explicit for every input |
| Decision logic | Implicit | Decision trees with all branches |
| Acceptance criteria | Brief bullet points | Detailed, testable conditions |
| Works for humans | Yes (humans fill gaps) | Yes |
| Works for AI agents | No (AI guesses at gaps) | Yes (AI has complete information) |
The Persona Concept
Personas deserve special attention because they fundamentally change how an AI agent makes decisions.
Consider a simple feature: "display a dashboard." Without a persona, the AI produces a generic dashboard. With personas, the AI makes different decisions for each one:
James (backend engineer, power user):
- Wants dense information, minimal whitespace.
- Prefers raw data over visualisations.
- Expects keyboard shortcuts and advanced filters.
- Tolerates complexity in exchange for power.
Sarah (first-time user, non-technical):
- Needs clear labels and explanations for every metric.
- Expects a guided onboarding flow.
- Prefers visual charts over data tables.
- Needs obvious navigation and help indicators.
The same feature, but the implementation decisions are completely different. Without personas in the spec, the AI has no basis for these decisions. It defaults to a generic middle ground that satisfies nobody.
Personas give the AI the context that human developers get from sitting in the same room as their users, watching how people interact with the product, and absorbing years of domain knowledge. The AI does not have that experience. Personas are the next best thing.
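Personas work best in a spec when they are structured data rather than prose, because an agent can then branch on them directly. A minimal sketch of the two personas above (field names are hypothetical):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Persona:
    """Hypothetical persona record an agent could consume alongside the spec."""
    name: str
    technical_level: str      # e.g. "power" or "novice"
    prefers_raw_data: bool    # tables over charts?
    needs_onboarding: bool    # guided first-run flow?

JAMES = Persona("James", technical_level="power",
                prefers_raw_data=True, needs_onboarding=False)
SARAH = Persona("Sarah", technical_level="novice",
                prefers_raw_data=False, needs_onboarding=True)

def show_onboarding(persona: Persona) -> bool:
    """One implementation decision driven directly by the persona."""
    return persona.needs_onboarding
```

The same feature request plus a different persona record yields a different, and defensible, implementation decision.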
Bottom Line
This is not about going back to waterfall as a project management methodology. Nobody is suggesting six-month planning cycles with gate reviews and change control boards. The cadence stays agile. The iteration stays fast.
What changes is the quality of input to the AI agent. Vague input produces vague output. Detailed input produces precise output. This has always been true for software engineering: the better the requirements, the better the implementation. But with human developers, "good enough" requirements were often good enough because humans compensated.
AI does not compensate. It does not iterate like humans. It does not ask clarifying questions. It does not fill gaps with experience. It needs the detail upfront.
The irony is that the discipline waterfall demanded, the one agile discarded as "too slow," is exactly what makes AI-assisted development fast and reliable. Write the requirements properly, and the agent builds it correctly the first time. Skip the requirements, and you spend more time fixing the agent's guesses than you saved by not writing them.