Tool Use Patterns#
An agent with access to 30 tools is not automatically more capable than one with 5. What matters is how it selects, sequences, and validates tool use. Poor tool use wastes tokens, introduces latency, and produces wrong results that look right.
Choosing the Right Tool#
When multiple tools could handle a task, the agent must pick the best one. This is harder than it sounds because tool descriptions are imperfect and tasks are ambiguous.
Match specificity to task. If the agent needs to find a function definition, a code-aware search tool (like AST-based grep) beats a generic text search. If it needs to read a known file, a direct file-read tool beats a search. Use the most specific tool available.
Consider cost and side effects. A tool that queries a production database is more expensive (and riskier) than one that queries a read replica. A tool that modifies files has side effects that a read-only tool does not. Prefer cheaper, safer tools when they provide the same information.
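One way to make these preferences operational is to rank candidate tools before dispatching. A minimal sketch, assuming a hypothetical Tool record rather than any real agent API:

interface Tool {
  name: string;
  specificity: number; // higher = more targeted to this task
  readOnly: boolean;   // true = no side effects
  cost: number;        // tokens, latency, or risk; lower is better
}

function pickTool(candidates: Tool[]): Tool {
  // Prefer the most specific tool, then side-effect-free tools,
  // then the cheapest.
  return [...candidates].sort(
    (a, b) =>
      b.specificity - a.specificity ||
      Number(b.readOnly) - Number(a.readOnly) ||
      a.cost - b.cost,
  )[0];
}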
Bad tool selection:
Task: "What version of Python does this project use?"
Action: Run a shell command `python --version`
Problem: Reports the system Python, not the project's version.
Better: Read pyproject.toml or .python-version file.

Bad tool selection:
Task: "Find all files importing the requests library"
Action: Use glob to find *.py files, then read each one looking for imports.
Problem: Reads potentially hundreds of files sequentially.
Better: Use grep/search tool with pattern "import requests|from requests".

Tool Chaining: Output as Input#
The most powerful tool use patterns chain tools together, using the output of one as the input to the next. The key is that each step should produce structured output that the next step can consume without guessing.
Linear chain – each tool feeds the next:
1. Search for files containing "DatabaseError" -> list of file paths
2. Read each file -> source code with line numbers
3. Analyze code -> structured list of error handling patterns
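A minimal sketch of this linear chain in TypeScript, assuming hypothetical tools.search, tools.readFile, and analyzeErrorHandling helpers (substitute your agent's actual tool interface):

interface ErrorPattern { file: string; pattern: string; }

async function auditErrorHandling(): Promise<ErrorPattern[]> {
  // Step 1: search -> list of file paths
  const paths: string[] = await tools.search({ pattern: "DatabaseError" });
  // Step 2: read each file -> source code
  const sources: string[] = [];
  for (const path of paths) {
    sources.push(await tools.readFile(path));
  }
  // Step 3: analyze -> structured list of error handling patterns
  return sources.flatMap((src) => analyzeErrorHandling(src));
}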
Fan-out/fan-in – one result feeds multiple parallel tools, then results merge:
1. Get list of microservices from config -> [service-a, service-b, service-c]
2. (parallel) Check health of each service -> [healthy, unhealthy, healthy]
3. Aggregate results -> "service-b is down, others healthy"
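The fan-out/fan-in shape maps naturally onto Promise.all. A sketch, again with hypothetical tools.* calls:

async function reportHealth(): Promise<string> {
  // Fan-out source: one call produces the list to fan out over
  const services: string[] = await tools.getServices();
  // Fan-out: independent health checks run in parallel
  const statuses = await Promise.all(
    services.map((s) => tools.checkHealth(s)),
  );
  // Fan-in: merge the parallel results into one answer
  const down = services.filter((_, i) => statuses[i] !== "healthy");
  return down.length === 0
    ? "all services healthy"
    : `${down.join(", ")} down, others healthy`;
}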
Conditional chaining – the next tool depends on the previous result:
1. Check if file exists -> yes/no
2a. If yes: read and modify the file
2b. If no: create the file from a template

When chaining, validate intermediate results before passing them forward. A search that returns zero results should not cause the next tool to process an empty list silently.
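A sketch of conditional chaining with an explicit guard on the intermediate result; the tools.* calls and applyPatch are hypothetical:

async function ensureConfig(path: string, patch: string): Promise<void> {
  const exists: boolean = await tools.fileExists(path);
  if (!exists) {
    // 2b: create the file from a template
    await tools.createFromTemplate(path, "config.template");
    return;
  }
  // 2a: read and modify the existing file
  const content: string = await tools.readFile(path);
  if (content.trim().length === 0) {
    // Validate the intermediate result rather than silently
    // patching an empty file
    throw new Error(`${path} exists but is empty; refusing to patch`);
  }
  await tools.writeFile(path, applyPatch(content, patch));
}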
Parallel Tool Execution#
Independent tool calls should run in parallel. If the agent needs to read three files, it should issue all three read calls simultaneously rather than one after another. This cuts wall-clock latency from the sum of the individual call times to roughly the duration of the slowest single call.
Rules for parallelization:
- Safe to parallelize: Reads from different sources, searches in different directories, API calls to different services.
- Must be sequential: Write after read (to the same file), delete after checking existence, any operation where the second call depends on the first result.
- Risky to parallelize: Multiple writes to the same system (race conditions), calls that share rate limits, operations that must happen in a specific order for correctness.
Good: Read file A, read file B, search for pattern C -> all independent, run in parallel.
Bad: Read file A, modify file A, read file A again -> sequential dependency.
Bad: Call API endpoint 1, call API endpoint 2 (same service, shared rate limit) -> may be safe but could hit rate limits. Consider sequential with small delay.
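A sketch of the safe and unsafe cases side by side (tools.* and update are hypothetical stand-ins):

async function refreshState(): Promise<void> {
  // Safe: three independent reads, issued simultaneously
  const [fileA, fileB, matches] = await Promise.all([
    tools.readFile("a.ts"),
    tools.readFile("b.ts"),
    tools.search({ pattern: "TODO" }),
  ]);

  // Must be sequential: each step depends on the previous result
  const before = await tools.readFile("config.json");
  await tools.writeFile("config.json", update(before));
  const after = await tools.readFile("config.json");
}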
Handling Tool Failures#
Tools fail. The agent needs a strategy beyond “try again.”
Distinguish tool failure from empty results. A search tool that returns zero matches is not a failure – the pattern just does not exist in the codebase. A search tool that returns a network error is a failure. The agent’s response should differ: “No matches found” versus “Search failed, trying alternative.”
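A sketch of this distinction, with a hypothetical tools.search; only a thrown error is treated as a failure worth falling back from:

type SearchOutcome =
  | { outcome: "found"; matches: string[] }
  | { outcome: "empty" }                  // valid answer, not a failure
  | { outcome: "failed"; error: string }; // real failure, try alternatives

async function classifySearch(pattern: string): Promise<SearchOutcome> {
  try {
    const matches: string[] = await tools.search({ pattern });
    return matches.length === 0
      ? { outcome: "empty" }
      : { outcome: "found", matches };
  } catch (err) {
    return { outcome: "failed", error: String(err) };
  }
}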
Try alternative tools. If a structured code search fails, fall back to text grep. If an API call fails, check whether there is a cached version or a different endpoint.
Report what you tried. When all alternatives fail, tell the user which tools you attempted and what errors each returned. This prevents the user from suggesting approaches the agent already tried.
Pattern: Fallback chain
async function findDefinition(symbol: string): Promise<Result> {
  // Try most specific tool first
  try {
    return await tools.astSearch({ symbol, type: "definition" });
  } catch (e) {
    // Fall back to regex search
    try {
      return await tools.grepSearch({
        pattern: `(function|class|const|let|var)\\s+${symbol}`,
      });
    } catch (e2) {
      // Both failed: report what was tried and why each attempt failed
      return {
        found: false,
        error: `AST search failed (${(e as Error).message}), regex search also failed (${(e2 as Error).message})`,
      };
    }
  }
}

Tool Result Validation#
Never blindly trust tool output. Validate before using it in downstream reasoning or passing it to the next tool.
Type checking. If you expect a list of file paths, verify the result is actually a list and each entry looks like a path. A tool might return an error message as a string where you expected structured data.
Sanity checks. If a tool returns a file with 0 bytes, that is probably wrong. If a search returns 10,000 matches, the pattern is too broad. If an API returns a timestamp from 1970, the data is likely a default value.
Cross-validation. When the result is critical, verify it with a second tool. If a search says a function is defined in utils.py, read that file and confirm the function actually exists there.
# Bad: trust the search result blindly
files = await search_tool(pattern="def process_payment")
# Immediately start modifying the first file found

# Good: validate before acting
files = await search_tool(pattern="def process_payment")
if not files:
    return "Function not found in codebase"

# Confirm the function actually exists in the reported file
content = await read_file(files[0])
if "def process_payment" not in content:
    # Search result was stale or wrong
    return f"Search reported {files[0]} but function not found there"

When Not to Use Tools#
Sometimes the best tool use is no tool use. Agents should reason about whether a tool call is actually necessary.
Do not look up what you already know. If the user just showed you the contents of a file, do not re-read it. If a previous tool call returned the answer, do not call the tool again.
Do not use tools for reasoning. If the question is “which of these two approaches is better,” that is a reasoning task, not a tool task. Calling a search tool will not help you evaluate tradeoffs.
Do not use tools for formatting. Converting JSON to YAML, reformatting a table, or summarizing text are all things the agent can do directly. Calling an external tool for string manipulation wastes a round trip.
Do not use tools speculatively. “Let me search for this just in case” burns tokens and time. Have a reason for every tool call. If you cannot articulate what you expect to find and how it will help, do not make the call.
The best agents use the fewest tools necessary to complete the task correctly.