Server Instructions: Giving LLMs a User Manual for Your Server
Many of us are still exploring the nooks and crannies of MCP and learning how to best use the building blocks of the protocol to enhance agents and applications. Some features, like Prompts, are frequently implemented and used within the MCP ecosystem. Others may appear a bit more obscure but have a lot of influence on how well an agent can interact with an MCP server. Server instructions fall in the latter category.
The Problem
Imagine you’re a Large Language Model (LLM) who just got handed a collection of tools from a database server, a file system server, and a notification server to complete a task. They might have already been carefully pre-selected or they might be more like what my workbench looks like in my garage—a mishmash of recently-used tools.
Now let’s say that the developer of the database server has pre-existing knowledge or preferences about how to best use their tools, as well as more background information about the underlying systems that power them.
Some examples could include:
- “Always use
validate_schema→create_backup→migrate_schemafor safe database migrations” - “When using the
export_datatool, the file system server’swrite_filetool is required for storing local copies” - “Database connection tools are rate limited to 10 requests per minute”
- “If
create_backupfails, check if the notification server is connected before attempting to send alerts” - “Only use
request_preferencesto ask the user for settings if elicitation is supported. Otherwise, fall back to using default configuration”
So now our question becomes: what’s the most effective way to share this contextual knowledge?
Solutions
Server instructions give the server a way to inject information that the LLM should always read in order to understand how to use the server—independent of individual prompts, tools, or messages.
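Concretely, the instructions live in a top-level field of the server's response to the `initialize` request, so the host receives them before any tools are called. Here is a minimal sketch of such a response on the wire; the server name and instruction text are illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "protocolVersion": "2025-06-18",
    "capabilities": { "tools": {} },
    "serverInfo": { "name": "database-server", "version": "1.0.0" },
    "instructions": "Always use 'validate_schema' -> 'create_backup' -> 'migrate_schema' for safe migrations. Database connection tools are rate limited to 10 requests per minute."
  }
}
```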
A Note on Implementation Variability
Because server instructions may be injected into the system prompt, they should be written with caution and diligence. No instructions are better than poorly written instructions.
Additionally, the exact way that the MCP host uses server instructions is up to the implementer, so it’s not always guaranteed that they will be injected into the system prompt.
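To make that variability concrete, here is one hypothetical way a host could fold instructions into the system prompt. Nothing in the spec mandates this; the `buildSystemPrompt` helper and its formatting are assumptions for illustration only:

```go
package main

import "strings"

// buildSystemPrompt is a hypothetical host-side helper: it appends each
// connected server's instructions (if any) to the host's own base prompt.
// Real hosts may place, truncate, or entirely ignore instructions.
func buildSystemPrompt(basePrompt string, serverInstructions map[string]string) string {
	parts := []string{basePrompt}
	for name, instructions := range serverInstructions {
		if instructions != "" {
			parts = append(parts, "Instructions from MCP server '"+name+"':\n"+instructions)
		}
	}
	return strings.Join(parts, "\n\n")
}
```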
Real-World Example: Optimizing GitHub PR Reviews
I tested server instructions using the official GitHub MCP server to see if they could improve how models handle complex workflows.
The Problem: Detailed Pull Request Reviews
One common use case where I thought instructions could be helpful is asking an LLM to "Review pull request #123." Without more guidance, a model might over-simplify and use the `create_and_submit_pull_request_review` tool to add all review feedback in a single comment.
The Solution: Workflow-Aware Instructions
One solution I tested with the GitHub MCP server is to add instructions based on enabled toolsets:
```go
package main

import (
	"slices"
	"strings"
)

// GenerateInstructions assembles server instructions from the enabled toolsets.
func GenerateInstructions(enabledToolsets []string) string {
	var instructions []string

	// Universal context management - always present
	baseInstruction := "GitHub API responses can overflow context windows. Strategy: 1) Always prefer 'search_*' tools over 'list_*' tools when possible, 2) Process large datasets in batches of 5-10 items, 3) For summarization tasks, fetch minimal data first, then drill down into specifics."

	// Only load instructions for enabled toolsets
	if slices.Contains(enabledToolsets, "pull_requests") {
		instructions = append(instructions, "PR review workflow: Always use 'create_pending_pull_request_review' → 'add_comment_to_pending_review' → 'submit_pending_pull_request_review' for complex reviews with line-specific comments.")
	}

	return strings.Join(append([]string{baseInstruction}, instructions...), " ")
}
```
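As a quick sanity check, calling the function directly shows what each configuration yields; the returned string is what the server hands back as its instructions at initialize time. This throwaway snippet is mine, not part of the GitHub server, and assumes it runs in the same package with "fmt" imported:

```go
// Prints the base context-management strategy plus the PR review
// workflow guidance, since only "pull_requests" is enabled.
fmt.Println(GenerateInstructions([]string{"pull_requests"}))
```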
Measuring Effectiveness: Quantitative Results
For this sample of chat sessions (10 runs per model, with and without instructions), I got the following results:
| Model | With Instructions | Without Instructions | Improvement |
|---|---|---|---|
| GPT-5-Mini | 8/10 (80%) | 2/10 (20%) | +60% |
| Claude Sonnet-4 | 9/10 (90%) | 10/10 (100%) | −10% |
| Overall | 17/20 (85%) | 12/20 (60%) | +25% |
Implementing Server Instructions: General Tips
One key to good instructions is focusing on what individual tool and resource descriptions don't convey:
1. Capture cross-feature relationships
```json
{
  "instructions": "Always call 'authenticate' before any 'fetch_*' tools. The 'cache_clear' tool invalidates all 'fetch_*' results."
}
```

2. Document operational patterns
```json
{
  "instructions": "For best performance: 1) Use 'batch_fetch' for multiple items, 2) Check 'rate_limit_status' before bulk operations, 3) Results are cached for 5 minutes."
}
```

3. Specify constraints and limitations
```json
{
  "instructions": "File operations limited to workspace directory. Binary files over 10MB will be rejected. Rate limit: 100 requests/minute across all tools."
}
```

4. Write model-agnostic instructions
Keep instructions factual and functional rather than assuming specific model behaviors.
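In the spirit of the bad/good contrasts in the next section, here is an illustrative pair. Both strings are invented, and 'next_page' is a hypothetical parameter name:

```
// Bad - assumes and steers model behavior
"instructions": "You are an expert engineer. Think step by step and be extremely thorough."

// Good - factual and functional
"instructions": "List results are paginated; pass the returned 'next_page' token to fetch the remainder."
```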
Anti-Patterns to Avoid
Don’t repeat tool descriptions:
```
// Bad - duplicates what's in tool.description
"instructions": "The search tool searches for files. The read tool reads files."

// Good - adds relationship context
"instructions": "Use 'search' before 'read' to validate file paths. Search results expire after 10 minutes."
```

Don't include marketing or superiority claims:
```
// Bad
"instructions": "This is the best server for all your needs! Superior to other servers!"

// Good
"instructions": "Specialized for Python AST analysis. Not suitable for binary file processing."
```

Don't write a manual:
```
// Bad - too long and detailed
"instructions": "This server provides comprehensive functionality for... [500 words]"

// Good - concise and actionable
"instructions": "GitHub integration server. Workflow: 1) 'auth_github', 2) 'list_repos', 3) 'clone_repo'."
```

What Server Instructions Can't Do
- Guarantee certain behavior: As with any text you give an LLM, your instructions aren’t going to be followed the same way all the time
- Account for suboptimal tool design: Tool descriptions are still going to make or break how well LLMs can use your server
- Change model personality or behavior: Server instructions are for explaining your tools, not for modifying how the model generally responds
Currently Supported Host Applications
For a complete list of host applications that support server instructions, refer to the Clients page in the MCP documentation.
For a basic demo of server instructions in action, you can use the Everything reference server.
Wrapping Up
Clear and actionable server instructions are a key tool in your MCP toolkit, offering a simple but effective way to enhance how LLMs interact with your server.
Resources
- MCP Documentation: Server Instructions Specification
- GitHub MCP Server: github.com/github/github-mcp-server
- Everything Reference Server: modelcontextprotocol/servers