Stop Guessing. Audit Yourself.
I wrote about the 10 Levels of Agentic Engineering as a framework for understanding where you stand. The response was clear: people wanted a way to measure it, not just read about it.
So I built one. The Agentic Skill Auditor is an open-source AI agent skill that scans your machine and project, scores you across 9 weighted categories, and tells you exactly where you sit on the 0–900 scale.
No self-assessment bias. No guessing. The auditor reads your actual configuration and gives you a score based on what it finds.
What It Does
The auditor is a bash script (scan.sh) that your AI agent runs. It inspects your file system for evidence of agentic engineering practices: context files, MCP servers, custom skills, memory systems, hooks, CI integrations, multi-agent configurations, background processes, and orchestration infrastructure.
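To illustrate the kind of evidence checks such a scan performs, here is a minimal sketch of a context-file probe. The file names come from the categories described in this post, but the function name and exact logic are illustrative assumptions, not the real scan.sh internals:

```shell
# Illustrative sketch only - the real scan.sh logic may differ.
# Returns success if the project directory contains any known context file.
has_context_file() {
  for f in CLAUDE.md .cursorrules .github/copilot-instructions.md; do
    [ -f "$1/$f" ] && return 0
  done
  return 1
}
```

The same pattern — probe for a file or directory, award points if it exists — generalises to MCP configs, skills directories, hooks, and the other categories.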
It produces two outputs:
- A visual report with coloured progress bars, score breakdowns, and level placement — designed for humans.
- Structured JSON embedded in the output — designed for your AI agent to consume and generate personalised recommendations.
The dual output is intentional. You get the dashboard. Your agent gets the data to tell you what to do next.
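As a sketch of how an agent-side consumer might pull the embedded JSON out of the report, assuming hypothetical `---JSON-BEGIN---`/`---JSON-END---` marker lines (the real delimiters in the auditor's output may differ):

```shell
# Hypothetical: assumes the JSON payload sits between marker lines in the
# report output. Prints only the JSON, stripping the markers themselves.
extract_json() {
  sed -n '/^---JSON-BEGIN---$/,/^---JSON-END---$/p' "$1" | sed '1d;$d'
}

# A field could then be read with Python 3 ("total_score" is an assumed name):
# extract_json report.txt | python3 -c 'import json,sys; print(json.load(sys.stdin)["total_score"])'
```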
The 9 Categories
Each category is weighted based on its impact on agentic maturity:
- Context Setup (1.5x) — Rules files like CLAUDE.md, .cursorrules, copilot-instructions.md
- Tool Connections / MCP (1.0x) — Model Context Protocol servers pulling live data
- Skills & Commands (1.0x) — Custom repeatable workflows in .claude/skills/ or equivalent
- Memory & Compounding (1.0x) — Persistent knowledge across sessions
- Feedback Loops & Hooks (1.5x) — Automated guardrails: linters, tests, AI hooks
- Pipeline / Headless (1.0x) — CI/CD integration and script-based agent invocation
- Multi-Agent / Multi-Model (1.0x) — Multiple AI tools with cost-aware routing
- Background / Always-On (0.5x) — Cron jobs and daemon processes
- Orchestration / Swarm (0.5x) — Agent-to-agent coordination
Context Setup and Feedback Loops carry the highest weight because they have the highest leverage. A great context file saves hours every week. Automated hooks prevent entire categories of mistakes.
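The arithmetic is straightforward: with each category scored 0–100, the nine weights above sum to 9.0, so a perfect audit yields 900. A minimal sketch of the weighted total, using integer weights scaled by 10 to stay in shell integer arithmetic (the function name is hypothetical, not the real scan.sh implementation):

```shell
# Weighted total: nine raw scores (0-100 each), in the category order above.
# Weights are x10 (15 = 1.5x, 10 = 1.0x, 5 = 0.5x) so the sum stays integral.
score_total() {
  total=0
  for pair in "15:$1" "10:$2" "10:$3" "10:$4" "15:$5" "10:$6" "10:$7" "5:$8" "5:$9"; do
    w=${pair%%:*}
    raw=${pair#*:}
    total=$(( total + raw * w ))
  done
  echo $(( total / 10 ))   # perfect scores: 9000 / 10 = 900
}
```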
How to Download
The auditor is a public GitHub repository. You can either clone it or download the zip.
Option 1: Git Clone
```shell
git clone https://github.com/Oaseru17/agentic-skill-auditor.git
```
Option 2: Download ZIP
Download the zip directly from GitHub: agentic-skill-auditor.zip. Unzip it into your skills directory.
If you use Claude Code, place it in your skills directory:
```shell
# Navigate to your skills directory
cd ~/.claude/skills/

# Option A: Clone
git clone https://github.com/Oaseru17/agentic-skill-auditor.git

# Option B: Download and unzip
curl -L -o auditor.zip https://github.com/Oaseru17/agentic-skill-auditor/archive/refs/heads/main.zip
unzip auditor.zip && mv agentic-skill-auditor-main agentic-skill-auditor
rm auditor.zip
```
That is it. No npm install, no dependencies, no build step. The auditor is a self-contained bash script with a knowledge file.
Requirements:
- Bash 3.2+ (default on macOS and Linux)
- Python 3 (for JSON processing)
- curl (optional)
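A quick preflight along these lines can confirm the requirements before running the scan. The helper names are hypothetical, and scan.sh may well do its own checking:

```shell
# Hypothetical preflight check for the requirements listed above.
have() { command -v "$1" >/dev/null 2>&1; }

check_requirements() {
  missing=0
  have bash    || { echo "missing: bash (3.2+ required)"; missing=1; }
  have python3 || { echo "missing: python3 (JSON processing)"; missing=1; }
  have curl    || echo "note: curl not found (only needed for ZIP download)"
  return $missing
}
```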
How to Use It
Once the skill is installed, ask your AI agent to run the audit. You can phrase the request however feels natural:
- “Audit my agentic workflow on this project”
- “Scan my agentic engineering setup”
- “Run the agentic skill auditor”
- “Use /agentic-skill-auditor to check my setup”
The agent will execute scan.sh, read the structured JSON output, and deliver a tailored report with specific recommendations based on your detected tools and current level.
The auditor detects 11 AI tools automatically: Claude Code, Cursor, GitHub Copilot, Windsurf, Aider, OpenAI Codex CLI, Cline, Continue.dev, Amazon Q Developer, Tabnine, and Supermaven.
Recommendations are personalised. If you use Cursor, you get Cursor-specific next steps. If you use Claude Code, you get Claude Code-specific guidance. The auditor does not give generic advice.
What You Get
After the scan completes, you receive:
- Total score out of 900
- Per-category breakdown showing where you scored and where you lost points
- Your level on the 10-level scale with a clear definition of what it means
- Next level requirements — exactly what you need to do to advance
- Tool-specific recommendations based on the AI tools detected on your machine
The 10 Levels
| Level | Title | Score |
|---|---|---|
| 0 | Terminal Tourist | 0–99 |
| 1 | Grounded | 100–199 |
| 2 | Connected | 200–299 |
| 3 | Skilled | 300–399 |
| 4 | Compounding Architect | 400–499 |
| 5 | Harness Builder | 500–599 |
| 6 | Pipeline Engineer | 600–699 |
| 7 | Multi-Agent Operator | 700–799 |
| 8 | Always On | 800–849 |
| 9 | Swarm Architect | 850–900 |
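Given the bands in the table, mapping a score to a level is mostly integer division by 100, with the two narrower top bands (800–849 and 850–900) handled separately. A sketch (the function name is hypothetical):

```shell
# Maps a 0-900 score to its level, using the bands from the table above.
score_to_level() {
  s="$1"
  if   [ "$s" -ge 850 ]; then echo 9
  elif [ "$s" -ge 800 ]; then echo 8
  else echo $(( s / 100 ))     # 0-99 -> 0, 100-199 -> 1, ... 700-799 -> 7
  fi
}
```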
For the full breakdown of what each level means and how to progress, read the companion post: The 10 Levels of Agentic Engineering.
Web Quiz
If you prefer not to install anything, there is also a web-based version of the assessment at oaseru.dev/audit. The web quiz is a self-assessment, so it relies on your honest answers rather than scanning your machine. The CLI auditor is more accurate because it verifies what actually exists.
Conclusion
The gap between engineers who use AI casually and those who have built agentic engineering systems is growing every week. The first step to closing that gap is knowing where you stand.
Clone the repo. Run the audit. See your score. Then pick one category and improve it. That is how you level up.
The auditor is open source and MIT licensed. If you want to add detection for new tools, improve the scoring logic, or expand the scan categories, contributions are welcome on GitHub.