# Designing Agent-Ready Projects

An “agent-ready” project is just a well-documented project. Every practice that helps an agent — clear conventions, explicit commands, tracked progress, documented decisions — also helps a new team member, a future-you who forgot the details, or a contractor picking up the project for the first time.

The difference is that humans can ask follow-up questions and gradually build context through conversation. Agents cannot. They need it written down, in the right place, at the right level of detail. Projects that meet this bar are better for everyone.

## What Makes a Project Hard for Agents

Agents fail on projects where context lives in people’s heads instead of files:

| Context Type | Where It Lives (Bad) | Where It Should Live (Good) |
| --- | --- | --- |
| “We use D1 (SQLite) with FTS5” | Someone’s memory | CLAUDE.md or README |
| “Run tests with `make test`” | Tribal knowledge | CLAUDE.md or a Makefile help target |
| “We chose D1 over Postgres because…” | A Slack conversation from last month | decisions-log.md or an ADR |
| “Deploy by running these 5 commands in order” | A wiki page nobody updates | A skill file or Makefile target |
| “Don’t use the old API — we’re migrating to v2” | Verbal warning from a teammate | CLAUDE.md conventions section |
| “The staging env password is…” | A DM | A secrets-manager reference in the docs |

Every row in the “bad” column is also bad for new human contributors. Agents just make the cost visible because they cannot absorb context from hallway conversations.

## The Agent-Ready Project Structure

A project that works well with agents has these files at minimum:

```
project/
├── CLAUDE.md              # What an agent (or a new team member) needs to know
├── TODO.md                # What needs to be done, in order
├── README.md              # What the project is (for humans browsing GitHub)
├── .claude/
│   ├── MEMORY.md          # Learned facts that persist across sessions
│   └── skills/            # Repeatable multi-step procedures
│       └── deploy.md
├── docs/
│   └── decisions/         # Why we chose X over Y
│       └── 001-database-choice.md
└── src/                   # The actual code
```

Not every project needs every file. A small script needs a CLAUDE.md with 5 lines. A multi-service platform needs all of it. Match the structure to the project’s complexity.

## CLAUDE.md: The Single Most Important File

CLAUDE.md is loaded automatically at the start of every agent session. It is the project’s operating manual — the minimum context needed to work effectively.

### What Goes in CLAUDE.md

```markdown
# Project Name

## Stack
- Language: TypeScript
- Framework: Cloudflare Workers
- Database: D1 (SQLite)
- Cache: KV namespace
- Hosting: Cloudflare Pages + Workers

## Key Commands
- Build site: `cd site && hugo`
- Run tests: `npm test`
- Deploy API: `cd api && npx wrangler deploy`
- Deploy site: `cd site && npx wrangler pages deploy public`

## Conventions
- All API responses use the json() helper for consistent CORS headers
- Use prepared statements for all D1 queries (never string interpolation)
- Hash client IPs with SHA-256 before logging (privacy)
- Rate limiting: 60 req/min per IP via KV with TTL
- Error responses: { error: string, status: number }

## Architecture
- api/src/index.ts — single-file Worker (all routes)
- site/ — Hugo static site (Book theme)
- api/schema/ — D1 migrations (numbered, applied in order)
```
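
As a concrete illustration of those conventions, here is a minimal TypeScript sketch of what they might look like inside the Worker. The `json()` helper is the one named in the conventions list; the binding names (`DB`, `KV`), the route logic, and the `articles` FTS5 table are assumptions made up for this example:

```typescript
// Sketch only: binding names, schema, and route logic are illustrative assumptions.
// Types come from @cloudflare/workers-types.
interface Env {
  DB: D1Database;   // assumed D1 binding name
  KV: KVNamespace;  // assumed KV binding name, used for rate limiting
}

// Convention: all API responses use the json() helper for consistent CORS headers,
// and error bodies follow the { error, status } shape.
function json(data: unknown, status = 200): Response {
  return new Response(JSON.stringify(data), {
    status,
    headers: {
      "Content-Type": "application/json",
      "Access-Control-Allow-Origin": "*",
    },
  });
}

// Convention: hash client IPs with SHA-256 before storing or logging them.
async function hashIp(ip: string): Promise<string> {
  const digest = await crypto.subtle.digest("SHA-256", new TextEncoder().encode(ip));
  return [...new Uint8Array(digest)].map((b) => b.toString(16).padStart(2, "0")).join("");
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const ip = request.headers.get("CF-Connecting-IP") ?? "unknown";

    // Convention: 60 req/min per IP, tracked in KV with a 60-second TTL.
    const key = `rate:${await hashIp(ip)}`;
    const count = Number((await env.KV.get(key)) ?? "0");
    if (count >= 60) return json({ error: "rate limited", status: 429 }, 429);
    await env.KV.put(key, String(count + 1), { expirationTtl: 60 });

    // Convention: prepared statements for all D1 queries, never string interpolation.
    const q = new URL(request.url).searchParams.get("q") ?? "";
    const { results } = await env.DB
      .prepare("SELECT title, url FROM articles WHERE articles MATCH ?")
      .bind(q)
      .all();

    return json({ results });
  },
};
```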

### What Does NOT Go in CLAUDE.md

- Secrets or credentials — reference where they are, never include them
- Full API documentation — link to it, do not duplicate it
- Historical decisions — put those in docs/decisions/
- Temporary state — that belongs in TODO.md or checkpoint files
- Things that change weekly — CLAUDE.md should be stable

### The Test: Would a New Hire Benefit?

Before adding something to CLAUDE.md, ask: “Would a new team member need to know this on day one?” If yes, include it. If it is too detailed for day one but needed eventually, put it in a linked document.

## TODO.md: Progress as a State Machine

A TODO.md file serves three audiences simultaneously:

1. You: What am I working on? What is next?
2. Agents: What should I do next? What is already done?
3. Collaborators: What is the project’s status?

### Effective TODO Structure

```markdown
# TODO

## Phase 1: Foundation [COMPLETE]
- [x] Set up project structure
- [x] Configure database schema
- [x] Deploy initial version

## Phase 2: Core Features [IN PROGRESS]
- [x] Search endpoint with FTS5
- [x] Category listing
- [ ] Feedback submission  <-- CURRENT
- [ ] Rate limiting middleware
- [ ] Request logging

## Phase 3: Polish [PENDING]
- [ ] Error page improvements
- [ ] Performance optimization
- [ ] Documentation
```

Key conventions:

- Phases are ordered and labeled with status
- Items within phases are ordered by dependency
- `<-- CURRENT` marks the active item (agents look for this)
- Completed items stay visible (context for what is already done)

### Why Checkboxes Beat Issue Trackers (for Agent Context)

Issue trackers (Jira, GitHub Issues, Linear) are excellent for team coordination. But an agent cannot efficiently query “what is the current sprint’s priority?” from a 500-issue backlog. A TODO.md file is:

- Immediate: 200-500 tokens, loaded in milliseconds
- Ordered: priority is implicit in position
- Self-contained: no API calls, no authentication, no query language
- Editable: the agent updates the file as it completes work

Use both. The issue tracker is the system of record for the team. TODO.md is the working context for the agent.
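
The marker convention also makes the file trivially machine-readable. As a sketch (this is hypothetical tooling, not part of any agent runtime), a few lines of TypeScript are enough to pull the active item out of TODO.md:

```typescript
// Sketch: locate the active TODO item by scanning for the <-- CURRENT marker.
// Assumes the phase-heading and checkbox conventions from the example above.
import { readFileSync } from "node:fs";

function currentTask(path = "TODO.md"): { phase: string; task: string } | null {
  let phase = "";
  for (const line of readFileSync(path, "utf8").split("\n")) {
    const heading = line.match(/^## (.+)$/);
    if (heading) phase = heading[1];
    if (line.includes("<-- CURRENT")) {
      const task = line.replace(/^\s*- \[.\]\s*/, "").replace("<-- CURRENT", "").trim();
      return { phase, task };
    }
  }
  return null; // no marker found
}

console.log(currentTask());
// e.g. { phase: "Phase 2: Core Features [IN PROGRESS]", task: "Feedback submission" }
```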

## Decisions Log: Preventing Re-Derivation

The most expensive context loss is re-deriving a decision that was already made. If the agent does not know why you chose Postgres over SQLite, it might suggest SQLite. If it does not know you already evaluated and rejected Redis for caching, it might propose Redis again.

### Architecture Decision Records (ADRs)

```markdown
# ADR-001: Database Choice

## Status: Accepted
## Date: 2026-02-15

## Context
We need a database for the knowledge API. Options: PostgreSQL (managed),
SQLite via D1 (serverless), or DynamoDB (NoSQL).

## Decision
D1 (SQLite) on Cloudflare.

## Rationale
- Free tier sufficient for our scale (5M reads, 100K writes/day)
- No server to manage — serverless, auto-scaling
- FTS5 for full-text search built into SQLite
- Co-located with Workers — minimal latency
- $0.75/million reads beyond free tier

## Rejected Alternatives
- PostgreSQL: Requires managed instance ($15-50/mo minimum), separate hosting
- DynamoDB: No full-text search, AWS lock-in, complex pricing

## Consequences
- Limited to SQLite SQL dialect
- Single-region writes (D1 limitation as of Feb 2026)
- No JOINs across databases — all data in one D1 database
```

An agent reading this file knows: we chose D1, why, what was rejected, and what the tradeoffs are. It will not suggest PostgreSQL or spend tokens evaluating options that were already evaluated.

Cost: 10 minutes to write. Savings: Every future session where this question would have been revisited.

## Skills Files: Repeatable Procedures

Any multi-step process that happens more than once should be a skill file. Deployments, database migrations, content syncing, environment setup — all of these involve specific commands in specific order with specific gotchas.

````markdown
# Skill: Deploy to Production

## Prerequisites
- Hugo site builds without errors
- All tests pass
- Content sync is current

## Steps
1. Build Hugo site
   ```bash
   cd site && hugo
   ```
2. Generate content sync SQL
   ```bash
   cd api && npx tsx scripts/sync-content.ts 1> schema/content-sync.sql
   ```
3. Apply to production D1
   ```bash
   source ~/.claude/secrets/agent-zone.env
   cd api && npx wrangler d1 execute agent-zone-db --remote --file=schema/content-sync.sql
   ```
4. Deploy Worker API
   ```bash
   cd api && npx wrangler deploy
   ```
5. Deploy Pages site
   ```bash
   cd site && npx wrangler pages deploy public --project-name=agent-zone --commit-dirty=true
   ```
6. Verify
   ```bash
   curl -s "https://api.agent-zone.ai/health" | jq .
   ```

## Known Gotchas
- `sync-content.ts` writes status to stderr — redirect stdout only (`1>`) to avoid SQL parse errors
- `CLOUDFLARE_API_TOKEN` must be exported, not just sourced, in non-interactive shells
- Pages project must exist first (`wrangler pages project create` if new)
````

Without this file, the agent re-derives the deploy process every time — reading config files, checking documentation, making guesses about command order. With this file, it executes the exact correct sequence in seconds.

## MEMORY.md: Institutional Knowledge

MEMORY.md captures facts about the project or environment that are not conventions (those go in CLAUDE.md) but remain useful across sessions.

```markdown
# Memory

## Platform
- Mac Mini M4 Pro (ARM64)
- QEMU cannot run Go binaries on this platform — must use native ARM64 images
- Minikube with Docker driver — containers run natively

## Discovered Gotchas
- Bitnami Helm charts name resources using the release name directly
- PostgreSQL 15 changed default permissions — must GRANT on public schema
- Mattermost has no official ARM64 Docker image
- Hugo Book theme requires Go module syntax for remote themes

## What Works
- D1 FTS5 search with rank ordering performs well up to 10K articles
- KV cache with 5-minute TTL is sufficient for search result caching
- Single-file Worker pattern scales to ~1000 lines before needing refactor
```

The distinction from CLAUDE.md: CLAUDE.md is prescriptive (“do this”); MEMORY.md is descriptive (“this is true”). CLAUDE.md is curated and stable, while MEMORY.md grows organically as the agent and human learn things.

## The Onboarding Test

A project is agent-ready when the following test passes:

1. Start a fresh agent session (no prior context)
2. Ask: "What is this project and how do I work on it?"
3. Agent reads CLAUDE.md and TODO.md
4. Agent should be able to answer:
   - What the project does
   - What technology stack it uses
   - What commands to build/test/deploy
   - What needs to be done next
   - What conventions to follow

If the agent cannot answer these from the project files, a new human contributor cannot either. Fix the files, not the agent.
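
Part of the test can even be automated. Here is a hypothetical readiness check in TypeScript; the file names match this chapter, but the required headings and the `<-- CURRENT` marker are assumptions drawn from the templates above:

```typescript
// Sketch: fail fast if the project is missing the files or sections an agent
// needs on day one. The required markers are illustrative assumptions.
import { existsSync, readFileSync } from "node:fs";

const required: Record<string, string[]> = {
  "CLAUDE.md": ["## Stack", "## Key Commands", "## Conventions"],
  "TODO.md": ["<-- CURRENT"],
};

let ok = true;
for (const [file, markers] of Object.entries(required)) {
  if (!existsSync(file)) {
    console.error(`missing ${file}`);
    ok = false;
    continue;
  }
  const text = readFileSync(file, "utf8");
  for (const marker of markers) {
    if (!text.includes(marker)) {
      console.error(`${file}: missing "${marker}"`);
      ok = false;
    }
  }
}
process.exit(ok ? 0 : 1);
```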

## Incremental Adoption

You do not need to create everything at once. Start with what gives the most immediate return:

| Priority | File | Time | When It Pays Off |
| --- | --- | --- | --- |
| 1 | CLAUDE.md | 10 min | Next session |
| 2 | TODO.md | 5 min | Next session |
| 3 | First skill file | 10 min | Next time you run that procedure |
| 4 | MEMORY.md | 5 min | Accumulates value over weeks |
| 5 | First ADR | 10 min | Next time someone asks “why did we choose X?” |
| 6 | Spec document template | 5 min | First sub-agent delegation |

Total for priorities 1-2: 15 minutes. This handles 80% of the context problem.

Total for all 6: 45 minutes. This handles 95% of the context problem and enables multi-session autonomous workflows.

The remaining 5% is project-specific — you discover it as you work and add it incrementally. A project is never “done” being agent-ready, just like it is never “done” being well-documented. But the first 15 minutes get you most of the way there.