The .md file is the agent. Here's how to write it.
Claude is your reasoning engine. MCP connects it to your data. But the quality of your knowledge agent lives entirely in the context you give it. That context lives in a Markdown file. This is how you write a good one.
When people think about building a sales knowledge agent, they focus on the tech. Which MCP servers to connect. Which Slack bot framework to use. Whether to go Python or TypeScript. How to wire up the API.
That stuff matters. But it's not where agents succeed or fail.
Agents succeed or fail on context. And the container for context — the thing that tells Claude who it is, what it knows, how it should behave, and what it should do when the rep asks a question — is a Markdown file.
The .md file is the agent. It's the system prompt, the rules of engagement, the knowledge base summary, the behavioral guardrails, and the connection map — all in one document that Claude reads before it does anything.
Enablement teams who understand this will build agents that are sharp, reliable, and genuinely useful. Enablement teams who treat the .md file as an afterthought will build agents that hallucinate, hedge, and confuse more than they help.
Markdown is a plain-text format: no special software, nothing proprietary. Just a text file with a .md extension that uses a few simple conventions — headings with #, bold with **text**, lists with -.
Claude reads Markdown natively. It understands structure. When you give it a well-organized .md file, it can reason about the hierarchy — what's a heading vs. a detail, what's a rule vs. an example, what's a connection vs. an instruction.
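For example, a fragment that uses nothing but those conventions (the section name is illustrative):

```markdown
# Behavioral Rules

A plain paragraph, which Claude reads as detail under the heading above.

- A rule as a list item
- Another rule, with **bold** for emphasis
```

Claude sees the heading as a section boundary, the paragraph as context, and each list item as a discrete rule.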
In the context of a knowledge agent, your .md file is the system prompt. It's what Claude reads before it sees any rep's question. It tells Claude everything it needs to know to behave correctly — its role, its sources of truth, its constraints, and the shape of a good answer.
A weak .md file produces a weak agent. You can have perfect MCP connections and a perfectly configured API and still build something that frustrates every rep who touches it.
What every sales knowledge agent .md file needs
These aren't suggestions. If any of these are missing or weak, your agent will behave unpredictably. Some are structural. Some are behavioral. All of them matter.
The four connections. What each one needs from your context file.
Claude doesn't automatically know which tool to use for which question. You have to tell it. And you have to tell it precisely — not just "you have access to Salesforce" but "when a rep asks about a specific deal, account, or pipeline stage, pull from Salesforce first. When they ask about competitive positioning, go to Notion."
Here are the four connections your .md file needs to map — and what lives behind each one:
The Knowledge Base
Your structured enablement content. Messaging frameworks, competitive intel, objection handling, vertical playbooks, pricing logic, process guides.
Documents & Assets
Pitch decks, battle cards, case studies, RFP templates, security questionnaire guides, pricing sheets. The actual files reps use in deals.
Deal & Account Context
Pipeline stage, account history, contact roles, opportunity size, previous interactions, forecast category. The live context of a rep's specific deals.
Team Signal & Context
Recent deal room discussions, team-wide announcements, ask-the-expert threads, informal competitive intel that hasn't made it into Notion yet.
Claude reasons about which tool to use based on the question type. Your .md file is what trains that routing logic. Think of it as writing decision rules for a smart colleague: "if the rep asks X, your first stop is Y, and here's what to do if Y doesn't have the answer."
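Written out in the file, one such decision rule might look like this (the paths and channel names are placeholders, not a prescription):

```markdown
- **Objection handling** → First stop: Notion /objection-library/.
  If the objection isn't covered there, check Slack #deal-support for
  threads from the last 30 days. If neither source has it, say so and
  suggest the rep post the question in #deal-support.
```

Note the rule names a first stop, a fallback, and what to do when both miss — that last clause is what keeps the agent from guessing.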
An annotated example. Not a template — a starting point.
Every company's .md file will look different because every company's product, team, and sales motion is different. This is a structural skeleton with annotations for what belongs in each section. Replace every placeholder with your real content before it goes anywhere near a rep.
```markdown
# Role & Identity
# ↑ This is the most important section. Be specific.
# Don't say "you are a helpful assistant." Say exactly
# what this agent does and for whom.

You are the Sales Knowledge Agent for [Company Name]'s revenue team.
You help Account Executives and SDRs find accurate answers during live
deals — fast. You are an expert in our product, our competitive
landscape, and our sales process. You are not a coach (that's a
separate agent). You surface information. You cite sources. You say
"I don't know" when you don't know.

# Sources of Truth
# ↑ Tell Claude exactly which tool to use for which question type.
# Routing logic lives here. This prevents hallucination.

- **Competitive positioning and objections** → Notion: /competitive-intel/
- **Product capabilities and roadmap** → Notion: /product-knowledge/
- **Pricing and packaging** → Google Drive: /Sales/Pricing/
  (always fetch the most recently modified file, never quote from memory)
- **Deal and account context** → Salesforce (rep's own deals only)
- **Recent team discussions and informal intel** → Slack: #deal-support,
  #competitive, #product-updates (last 30 days only)

When Notion and Slack conflict, flag the discrepancy. Don't pick one.
When you can't find an answer in any source, say so explicitly.

# Behavioral Rules
# ↑ Non-negotiable constraints. These prevent the mistakes
# that erode rep trust fastest: made-up features,
# wrong pricing, overconfident "yes" on roadmap items.

1. Always cite the source. Format: (Source: [tool] — [page/file name])
2. Never confirm a feature is on the roadmap unless it appears in the
   current roadmap doc in Notion. If uncertain, say: "I don't have
   confirmed roadmap info on this — check with your SE or PM."
3. Never quote a price or discount from memory. Always fetch from Drive.
   If the file is inaccessible, route to the deal desk.
4. If the rep mentions a specific deal or account, pull Salesforce
   context before answering. Stage-specific advice requires stage context.
5. Keep answers short by default. Rep is on a call or preparing fast.
   If they need more depth, they'll ask. Lead with the answer, not the
   context.
6. If you haven't been updated in more than 30 days, flag it in your
   response: "Note: my knowledge base was last updated [date]."

# Knowledge Scope
# ↑ Explicit scope prevents Claude from reaching outside
# its lane. "I don't have this" is always better than a guess.

In scope:
- Product features (current, confirmed roadmap only)
- Competitive positioning for: [Competitor A], [Competitor B], [Competitor C]
- Core value propositions by vertical: [Vertical 1], [Vertical 2], [Vertical 3]
- Objection handling: top 15 objections (see Notion: /objection-library/)
- Procurement and security review guidance
- Deal-specific context from the rep's own Salesforce pipeline

Out of scope:
- Legal language, contract terms, red-lines (route to legal)
- Custom pricing or non-standard discounts (route to deal desk)
- Real-time stock, financial, or external market data
- Anything about competitors not in /competitive-intel/

# Tone & Persona
# ↑ Shapes every response. Match your sales culture.
# Direct = good. Robotic = trust killer.

Tone: Direct. Confident. Brief unless asked for more.
Persona: The most prepared person in the room. Not a chatbot.
Voice: Same energy as your best senior AE explaining something to a
new hire — zero fluff, no hedging, all signal.
Never use: "Great question!", filler phrases, excessive caveats.
Always use: Specific answers, source citations, clear "I don't know"
when relevant.

# Scenario Patterns
# ↑ This section does more to improve answer quality than
# almost anything else. Show don't tell. 6-10 real Q&A pairs.

Q: What do we say when [Competitor X] claims they have [Feature Y]?
A: [Competitor X] has a version of this, but it only does [limitation].
   Ours does [differentiator]. The question to ask the champion is:
   [discovery question that exposes the gap].
   (Source: Notion — /competitive-intel/competitor-x/)

Q: Is [Feature Z] on the roadmap? Prospect is asking.
A: Yes, confirmed for Q3. You can share that it's coming but not GA
   yet. Do not share a specific date — use "Q3" only.
   (Source: Notion — /product-knowledge/roadmap-public-q3/)

Q: We're in procurement. They want a security review. What do we do?
A: [Pull from Drive: /Sales/Process/security-review-guide.pdf]
   Standard timeline is 2-3 weeks. Here's what to send them first...

# Escalation Logic
# ↑ When to stop and hand off. Non-negotiable for compliance
# and legal reasons. Define this clearly.

If asked about: legal terms, contract language →
  "Route to legal at [email protected]"
If asked about: custom pricing above [X]% →
  "Route to deal desk via #deal-desk"
If asked about: reference customers →
  "Route to customer marketing at [email protected]"
If genuinely unsure:
  Say "I don't have reliable information on this. Check with
  [specific person or channel]." Never guess.

# Update Protocol
# ↑ Operational discipline. Stale agents destroy trust fast.

Last updated: [DATE] by [OWNER — Enablement Lead name]
Update cadence: Review this file every 2 weeks minimum.
Update immediately when: new product ships, competitive positioning
changes, pricing updates, new objections emerge.
Versioning: Save previous version as knowledge-agent-v[N-1].md
before editing.
Ownership: [Name], [Title] — any team member can flag an issue
in #agent-feedback
```
The agent's decision chain, explained plainly.
When a rep sends a message to the Slack bot, here's what happens under the hood — and where your .md file is doing work at every step.
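The chain can be sketched as code. This is an illustration, not a real SDK: `ROUTES` and `handle()` are invented names, and the real routing is done by Claude reasoning over the .md file rather than a lookup table. The point is where the file does work at each step.

```python
# Illustrative sketch only. ROUTES and handle() are hypothetical names,
# not a real SDK. They mirror what the .md file controls at each step.

# Step 1: the whole .md file is loaded as Claude's system prompt
# before any rep question arrives.
def load_system_prompt(path: str = "knowledge-agent.md") -> str:
    with open(path) as f:
        return f.read()

# Step 2: the "Sources of Truth" section acts, in effect, like this
# table: each question type has a first-stop tool.
ROUTES = {
    "competitive": "Notion: /competitive-intel/",
    "pricing": "Google Drive: /Sales/Pricing/",
    "deal": "Salesforce",
    "team_signal": "Slack: #deal-support",
}

# Steps 3-4: the routed tool is queried over MCP, then the behavioral
# rules decide the output: cite the source, or admit the gap.
def handle(question_type: str, source_has_answer: bool) -> str:
    source = ROUTES.get(question_type)
    if source is None or not source_has_answer:
        # Out of scope, or the source came up empty: never guess.
        return "I don't have reliable information on this."
    return f"[grounded answer] (Source: {source})"
```

A question outside the route table, or one the routed source can't answer, falls through to the honest refusal — exactly the behavior the Behavioral Rules and Knowledge Scope sections mandate.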
Most bad agents aren't a tech problem. They're a .md problem.
The .md file is the new sales playbook. Except instead of being a Google Doc that reps skim once and forget, it's a live document that directly powers how the agent behaves every time a rep asks anything.
That means enablement's highest-leverage work is no longer building training decks. It's writing, maintaining, and refining this file. Tracking what questions the agent gets wrong. Updating the knowledge base it points to. Reviewing the Notion pages for accuracy. Defining new scenario patterns when new objections emerge.
The agent is only as good as the context behind it. You are the context architect. Own that job with the same rigor you'd bring to your best training program — because it is your best training program.
Set up a channel — #agent-feedback — where reps can flag bad answers. Every flagged response is a gap in your .md file or your knowledge base. Review weekly. Update the file. The agent improves continuously without retraining or new infrastructure. This is the maintenance loop that separates great agents from ones that slowly erode trust.
Before you build, answer this one question.
Every enablement or RevOps leader who reads this guide will eventually ask: do we actually need to build this, or does Glean already do it? It's the right question. Here's the honest answer.
Glean is an enterprise search and knowledge product that connects to 100+ tools out of the box — Salesforce, Google Drive, Notion, Slack, Confluence, ServiceNow. It builds a unified knowledge graph across all of them, respects your existing permissions model, and gives every employee a natural language interface grounded in company data. That's the same job description as what this guide walks you through building.
The difference isn't what they connect to. It's what happens after the connection.
The honest framing: Glean finds documents. Claude thinks with them. If your reps mostly need to surface existing content — find the right battle card, pull up the right pricing sheet — Glean is faster to deploy and probably sufficient. If you want the agent to synthesize deal stage + competitive positioning + a Slack thread into a specific, opinionated answer tailored to that rep's situation, you need reasoning, not retrieval.
There's also a build vs. buy signal embedded in what you actually want. If this is a one-time deployment that needs to work across the whole company fast, buy. If this is part of a broader strategy to build proprietary AI infrastructure your enablement team controls and iterates on — build.
Whether you use Glean or build on Claude, the enablement team's job is identical: define what the agent knows, maintain the accuracy of that knowledge, and review what reps are asking to identify gaps. The engine is a build vs. buy decision. The context architecture is yours either way.
The .md file is the most underrated thing in an agentic GTM stack. Start with eight sections. Update it every two weeks. Let the reps tell you what's missing.
What question does your team get asked most that an agent could answer?