The Loop That Gets Smarter Every Iteration

Most AI-assisted development is linear. You prompt, the AI responds, you use the output or discard it. Each interaction is independent. Nothing compounds. Nothing improves automatically.

But there is a different way to work with AI agents, a way where every task makes the system smarter, every failure teaches a lesson that is automatically captured, and every success creates a pattern that can be repeated without re-explanation.

This is the compounding loop: Plan, Delegate, Assess, Codify. Each cycle leaves the system better than it found it.

The Compounding Loop

The compounding loop, popularised by Kieran Klaassen, is a four-step cycle that turns linear AI agent usage into compounding improvement:

1. Plan

Before delegating anything, break the task down. Define what success looks like. Identify which skills, tools, and context the agent will need. Planning is not overhead; it is the investment that makes everything else faster.

2. Delegate

Assign the task to the right agent or skill. Not every task needs the same model, the same approach, or the same level of autonomy. Delegation is about matching the task to the right capability.

3. Assess

Review the output. Did the agent meet the success criteria defined in the planning step? What worked? What failed? What was surprising? Assessment is not just quality control; it is data collection for the next step.

4. Codify

This is where the compounding happens. Take what you learned from the assessment and encode it back into the system. Update CLAUDE.md with new conventions. Create a SKILL.md for a workflow that worked well. Add a gotcha to the knowledge bank. Fix a constraint that was missing.

Every cycle through the loop leaves the system with better context, more skills, and fewer blind spots. The tenth task is dramatically faster than the first, not because you got better at prompting, but because the system got better at executing.
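The four steps can be sketched as a single cycle. This is an illustrative skeleton, not a real orchestrator: `run_agent` is a hypothetical stand-in for whatever agent runner you use, and `KnowledgeBase` stands in for CLAUDE.md, SKILL.md, or a gotcha list. The point it demonstrates is that lessons codified in one cycle become context for the next.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    description: str
    success_criteria: list[str]          # defined during the Plan step

@dataclass
class KnowledgeBase:
    """Stand-in for CLAUDE.md / SKILL.md / a knowledge bank."""
    conventions: list[str] = field(default_factory=list)

    def codify(self, lesson: str) -> None:
        # Codify: every lesson learned is written back into shared context.
        self.conventions.append(lesson)

def compounding_cycle(task: Task, kb: KnowledgeBase, run_agent) -> bool:
    # Plan: the context handed to the agent includes everything codified so far.
    context = "\n".join(kb.conventions)
    # Delegate: give the task and accumulated context to the agent.
    output = run_agent(task.description, context)
    # Assess: check the output against the success criteria from the plan.
    failures = [c for c in task.success_criteria if c not in output]
    # Codify: record what was missed, so the next cycle starts smarter.
    for miss in failures:
        kb.codify(f"Gotcha: agents tend to miss '{miss}' on tasks like: {task.description}")
    return not failures
```

Run the same task twice with a toy agent that only reflects its context back, and the second cycle succeeds where the first failed, because the first cycle's lesson was codified.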

Backpressure: The Self-Correction Mechanism

Backpressure is the concept that makes autonomous AI agents reliable. Without it, agents drift, hallucinate, and produce plausible-looking garbage. With it, agents self-correct.

Backpressure comes from hard constraints that the agent cannot bypass:

  • Type systems: TypeScript, Zod schemas, and strict type checking catch structural errors before they reach production.
  • Tests: Unit tests, integration tests, and end-to-end tests define correct behaviour. The agent must produce code that passes them.
  • Linters: ESLint, Prettier, and similar tools enforce formatting and convention compliance automatically.
  • Pre-commit hooks: Gates that prevent non-compliant code from entering the repository.

The key insight: constraints are better than checklists. A checklist asks the agent to remember. A constraint forces the agent to comply. Type errors, test failures, and lint violations are not obstacles. They are the feedback loop that makes autonomous agents trustworthy.

When an agent generates code that fails the type checker, it does not need a human to tell it what went wrong. The error message is the feedback. The agent reads it, fixes the issue, and tries again. This is self-correction through backpressure.

The Ralph Loop

The Ralph Loop takes the compounding loop concept to its logical extreme: a fully autonomous agent loop that runs repeatedly until every item in a product requirements document (PRD) is complete.

How it works:

  1. A PRD defines the complete set of requirements for a feature or task.
  2. An agent instance reads the PRD and works through each item.
  3. When the instance finishes (or gets stuck), it writes its progress to a state file.
  4. A fresh agent instance is spawned, reads the state file, and continues where the previous one left off.
  5. This repeats until all PRD items are marked complete.

The critical design choice: each iteration uses a fresh instance. This prevents context window pollution, avoids compounding hallucinations, and gives each iteration a clean start with only the essential state.
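A minimal sketch of that orchestration shell, with a JSON state file as the only memory shared between iterations. `run_fresh_instance` is a hypothetical stand-in for spawning a new agent; the names and file layout are invented for illustration.

```python
import json
from pathlib import Path

def load_state(state_file: Path, prd_items: list[str]) -> dict:
    """A fresh instance starts from the state file, never from chat history."""
    if state_file.exists():
        return json.loads(state_file.read_text())
    return {"items": {item: "todo" for item in prd_items}}

def ralph_loop(prd_items: list[str], run_fresh_instance,
               state_file: Path = Path("ralph_state.json"),
               max_iterations: int = 50) -> dict:
    for _ in range(max_iterations):
        state = load_state(state_file, prd_items)   # clean start, essential state only
        todo = [item for item, status in state["items"].items() if status != "done"]
        if not todo:
            break                                   # every PRD item is marked complete
        # Spawn a fresh instance: it sees the todo list, not prior transcripts.
        for item in run_fresh_instance(todo):
            state["items"][item] = "done"
        state_file.write_text(json.dumps(state))    # progress survives the instance
    return load_state(state_file, prd_items)
```

Because each iteration reconstructs its entire working context from the state file, a stuck or crashed instance costs one iteration, not the whole run.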

The Ralph Loop is not theoretical. It is the pattern production-grade agent orchestrators use to handle long-running tasks that exceed a single context window.

Why Documentation Must Come First

The compounding loop only works if there is something to codify into. And backpressure only works if the constraints are documented and enforced. This leads to a fundamental principle:

Document first. Standardise second. Automate third.

  • Document first: Write down your conventions, architecture decisions, and workflows. This is CLAUDE.md, SKILL.md, and your knowledge bank. Without documentation, there is nothing for the agent to learn from or be constrained by.
  • Standardise second: Turn documentation into enforceable standards. Linter rules, type schemas, test suites, and pre-commit hooks. Standards are documentation with teeth.
  • Automate third: Only automate what is documented and standardised. Automating undocumented processes creates brittle, opaque systems. Automating standardised processes creates reliable, self-correcting systems.
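As a toy illustration of the progression, here is a documented convention turned into a standard and then a gate. The convention itself (lowercase snake_case filenames) and the script are invented for the example; in practice the final step would be wired into a pre-commit hook.

```python
import re
import sys

# Document first: the convention, written down where agents and humans can read it.
CONVENTION = "All Python source filenames are lowercase snake_case."

# Standardise second: the same sentence as an enforceable rule with teeth.
SNAKE_CASE = re.compile(r"[a-z][a-z0-9_]*\.py")

def check_filenames(filenames: list[str]) -> list[str]:
    """Return one violation message per non-compliant file (empty list = pass)."""
    return [f"{name}: violates convention ({CONVENTION})"
            for name in filenames if not SNAKE_CASE.fullmatch(name)]

# Automate third: wire the standard into a gate so non-compliant code
# cannot enter the repository at all (e.g. called from a pre-commit hook).
if __name__ == "__main__":
    violations = check_filenames(sys.argv[1:])
    print("\n".join(violations))
    sys.exit(1 if violations else 0)
```

The ordering matters: the regex is only trustworthy because the prose convention exists to check it against, and the exit-code gate is only safe because the rule it enforces is explicit.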

Most teams try to automate first and document later. This is backwards. The documentation is not the overhead; it is the foundation that makes automation possible and reliable.

Conclusion

The compounding loop (Plan, Delegate, Assess, Codify) transforms AI agent usage from a series of independent interactions into a self-improving system. Backpressure through type systems, tests, and linters makes that system trustworthy. The Ralph Loop shows how far autonomy can go when the foundation is solid.

But none of it works without documentation. The engineers who get exponential value from AI agents are not the ones with the best prompts or the fastest models. They are the ones who invested in documenting their systems first, and then built automation on top of that foundation.

Every task should leave the system smarter than it found it. That is the compounding loop. That is what changes everything.