The Personal Framework

Most advice about AI workflows starts with the tool. "Use this prompt." "Try this agent." "Install this MCP server." The problem is that tools change every month, but your workflow is yours. If you build around a tool, you rebuild every time the tool changes. If you build around your workflow, the tools become interchangeable.

This is a framework for improving your own workflow with AI. It is not about any specific tool. It is about understanding what you do, where AI fits, and how to build the automation around your actual process.

The goal is not to replace your workflow with AI. The goal is to keep your workflow and let AI handle the parts that do not require your judgment.

Step 1: Document What You Do Today

Before you automate anything, write down what you actually do. Not what you think you should do. Not what the process document says. What you actually do, step by step, when you sit down to work.

Be specific. If your workflow for implementing a feature looks like this, write it down exactly:

  • Read the ticket and acceptance criteria.
  • Check the existing codebase for related patterns.
  • Create a branch.
  • Write the implementation.
  • Write tests.
  • Run tests locally.
  • Fix failing tests.
  • Write the PR description.
  • Create the PR.
  • Respond to review comments.
  • Merge after approval.

Do the same for every workflow you repeat: bug fixing, code review, documentation, release process, on-call debugging. The more specific you are, the more clearly you will see where AI fits and where it does not.

Most people skip this step because it feels obvious. But writing it down reveals steps you forgot you were doing, steps that take longer than you thought, and steps that are purely mechanical.

Step 2: Identify What AI Can Handle

Go through each step in your documented workflow and mark it with one of two labels: "I do this" or "AI can do this."

AI is good at:

  • Boilerplate code: scaffolding, repeated patterns, standard implementations.
  • Writing tests: unit tests, edge case generation, test data creation.
  • Documentation: PR descriptions, code comments, API docs, changelogs.
  • Code review checklists: catching common issues, style violations, missing error handling.
  • Research: finding relevant code, scanning documentation, summarising large files.
  • Repetitive edits: renaming across files, updating imports, migrating patterns.

AI is not yet reliable at:

  • Architecture decisions: choosing between approaches requires business context AI does not have.
  • Ambiguous requirements: when the spec is vague, AI guesses instead of asking for clarification.
  • Business context: understanding why a feature matters, who uses it, what the trade-offs mean.
  • Cross-system reasoning: understanding how a change in one service affects another service three hops away.
  • Judgment calls: when to cut scope, when to push back on requirements, when to ship despite imperfection.

The dividing line is simple: if the step requires context that only you have, you do it. If the step can be defined clearly enough that a junior developer could follow instructions to complete it, AI can do it.

The question is not "Can AI do this?" The question is "Can I define this clearly enough that AI will do it correctly?"
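
Applied to the Step 1 checklist, the labelling can be as simple as tagging each step with its owner. This is a sketch, and the assignments below are illustrative rather than prescriptive; yours will differ:

```python
# Each step from the Step 1 checklist, tagged with who owns it.
# The assignments below are illustrative; yours will differ.
WORKFLOW = [
    ("Read the ticket and acceptance criteria", "human"),
    ("Check the codebase for related patterns", "ai"),
    ("Create a branch",                         "ai"),
    ("Write the implementation",                "ai"),
    ("Write tests",                             "ai"),
    ("Run tests locally",                       "ai"),
    ("Fix failing tests",                       "ai"),
    ("Write the PR description",                "ai"),
    ("Create the PR",                           "ai"),
    ("Respond to review comments",              "human"),
    ("Merge after approval",                    "human"),
]

ai_steps = [step for step, owner in WORKFLOW if owner == "ai"]
print(f"AI handles {len(ai_steps)} of {len(WORKFLOW)} steps")
# → AI handles 8 of 11 steps
```

Even this rough split tells you something: most of the mechanical middle is delegable, and the judgment sits at the edges.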

Step 3: Map the Handoff Points

Now you know which steps you do and which steps AI handles. The next step is defining the boundaries โ€” the handoff points where control passes between you and the agent.

A handoff point is any moment where:

  • You give the agent a task and it starts working.
  • The agent finishes and you review the output.
  • The agent needs a decision it cannot make alone.
  • You need to provide context that the agent does not have.

Handoff points are where hooks live. Each handoff is an opportunity for automation:

  • Before the agent starts: a PreSession hook loads context so you do not have to explain it manually.
  • When the agent finishes: a PostTask hook runs quality checks so you do not review obvious issues.
  • When the agent calls a dangerous tool: a PostToolUse hook gates the action so you only intervene when it matters.
  • When you approve the work: a PostApproval hook captures learnings so the next session is better.

The cleaner your handoff points, the smoother the workflow. If you find yourself constantly interrupting the agent mid-task to provide context, the handoff point is in the wrong place. Move it earlier โ€” give the agent the context before it starts.
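
As a sketch of one of these hooks, here is what a PostTask quality gate might look like: a small script the agent runtime invokes when a task finishes, which signals pass or fail through its return value. The event name and payload shape follow this article's terminology and are assumptions; check your tool's hook documentation for the actual contract.

```python
import subprocess
import sys

def should_gate(changed_files):
    """Gate only when source files changed (a simple heuristic)."""
    return any(path.endswith(".py") for path in changed_files)

def run_gate(event, run_tests=None):
    """Return 0 to let the handoff proceed, non-zero to block it.

    `event` is an assumed payload like {"task": "...", "files_changed": [...]}.
    """
    if not should_gate(event.get("files_changed", [])):
        return 0
    # Default: run the project's test suite; injectable for testing.
    run_tests = run_tests or (lambda: subprocess.run(["pytest", "--quiet"]).returncode)
    if run_tests() != 0:
        print("PostTask gate: tests failed, blocking handoff", file=sys.stderr)
        return 1
    return 0

# Installed as a hook, the runtime would pipe the event in as JSON:
#   import json; sys.exit(run_gate(json.load(sys.stdin)))
```

The point of the gate is not sophistication; it is that the obvious failures never reach your review queue.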

Step 4: Build the Agent Around YOUR Workflow

Now you have your documented workflow, the AI-suitable steps identified, and the handoff points mapped. The final step is encoding this into your agent configuration.

The tools for this are:

  • CLAUDE.md: your project instructions. This is where you encode decisions, conventions, and constraints that the agent must follow. It is the "how we do things here" document.
  • Skills: specialist configurations for specific task types. A backend skill knows your service architecture. A test skill knows your testing patterns. A planner skill knows your ticket format.
  • Hooks: the automation that runs at handoff points. PreSession loads context. PostTask enforces quality. PostToolUse gates actions. PostApproval captures learnings.
  • MCP servers: the connections to external tools. GitHub for PRs. Database for queries. Browser for testing. Slack for notifications.
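
For example, a fragment of such an instructions file might look like this; every convention below is invented for illustration, and yours should record your actual conventions:

```markdown
# Project instructions (illustrative fragment)

## Conventions
- New endpoints follow the handler/service/repository split.
- Tests live next to the code they cover.

## Constraints
- Never commit directly to main; always open a PR.
- Database migrations require human review before they run.

## Handoffs
- If acceptance criteria are ambiguous, stop and ask. Do not guess.
```

Notice that the last section encodes a handoff point directly: it tells the agent when to stop and return control to you.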

The critical insight is this: build around YOUR process. Do not adopt someone else's workflow because it looks impressive. If your workflow differs from the popular AI dev workflow on Twitter, that is fine. The best system is the one that matches how you actually work, not how someone else works.

Your workflow is your competitive advantage. AI accelerates it. It does not replace it.

Real Examples

Work workflow (engineering tasks):

In a typical implementation workflow, the handoff points look like this:

  • You read the ticket and make architecture decisions (human step).
  • You hand the spec to the agent with clear acceptance criteria (handoff).
  • The agent implements, writes tests, and creates a PR (AI steps).
  • A review agent (different instance) checks the work (AI step with separation of concerns).
  • You review the PR and approve or request changes (human step).
  • The agent captures learnings from the session (automated via hook).
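
The final automated step, capturing learnings, can be as small as a script that appends a session summary to a running log the next session loads as context. This is an illustrative sketch using this article's hook names, not any tool's real API; the payload shape and file location are assumptions:

```python
from datetime import date
from pathlib import Path

def capture_learnings(event, path=Path("docs/learnings.md")):
    """Hypothetical PostApproval hook body: append what the session
    learned to a markdown log. `event` is an assumed payload like
    {"task": "...", "learnings": ["..."]}."""
    path.parent.mkdir(parents=True, exist_ok=True)
    lines = [f"\n## {event.get('task', 'unknown task')} ({date.today()})"]
    lines += [f"- {note}" for note in event.get("learnings", [])]
    with path.open("a") as f:
        f.write("\n".join(lines) + "\n")
    return path
```

Paired with a PreSession hook that reads the same file back in, this closes the loop: what one session learns, the next session starts with.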

Content workflow (writing, documentation):

The handoff points are completely different:

  • You decide what to write about and outline the key points (human step).
  • You hand the outline to the agent with tone, audience, and format instructions (handoff).
  • The agent produces a first draft (AI step).
  • You edit for voice, accuracy, and nuance (human step).
  • The agent formats, creates metadata, and publishes (AI steps).

The two workflows look nothing alike, but the framework is the same: document, identify, map handoffs, build.

The Rule

The rule is simple, and the order matters:

Document first. Standardise second. Automate third.

If you automate before you standardise, you automate chaos. If you standardise before you document, you standardise assumptions. If you document first, you see clearly what to standardise. If you standardise clearly, you know exactly what to automate.

This framework is not about any specific AI tool. Tools will change. Models will improve. New capabilities will appear. But your workflow โ€” the way you think about problems, the order in which you work, the handoff points where your judgment matters โ€” that is yours. Build around it, and every new tool becomes an upgrade, not a disruption.