Operational Blueprint

Designing the AI-Native GTM Team & Revenue Architecture

Most RevOps teams are not failing at AI because they lack access. They are failing because nobody designed the transition. Four roles. Twelve .md files. One deliberate tradeoff.

Bola Akinsanya · RevOps + GTM · 12 min read

There is a version of AI adoption that looks like this: the company buys licenses, sends a Slack message about it, runs a soft one-hour training, and waits for people to figure it out. Six months later, a handful of people are using it well, most are still asking basic questions and spending hours fixing the output, and the team lead is wondering why the investment has not paid off.

I have seen this pattern play out at every scale. And I have lived the version of it that has nothing to do with AI. I remember how hard it was to get my team to use Slack. Asana was a complete failure. They preferred spreadsheets. The tools were objectively better. The team agreed they were better. When I went on an extended vacation, I came back and they had abandoned the operating system entirely. Back to spreadsheets. Back to email threads. Back to the thing that was comfortable.

This is the part that matters more than any agent architecture or .md file design. If the team does not reach genuine proficiency with the underlying tools, the AI infrastructure will not stick. You will come back from a two-week break and find everyone right where they started.

Going AI-native is not "use Claude more." It is designing each role's agent architecture: what .md files they own, what those files connect to, what workflows they automate, and what stays human.

The gap is not talent. It is investment. Specifically: training time, structured reinforcement, and the .md infrastructure that turns casual AI usage into compounding operational leverage. Most organizations have not made the deliberate tradeoff of accepting a slower quarter now to accelerate for the next 18 months. That tradeoff is where everything starts. My POV, not my employer's position.

The Prerequisite

Tool Proficiency Before Agent Design

Before anyone designs a .md file, they need to be fluent in the tools the .md files connect to. This is not optional. It is the foundation the entire system sits on. And the sequence matters. Most organizations skip straight to "build an agent," which is like asking someone to write a novel before they have learned to type.

Prerequisite Infrastructure

Three Layers of Proficiency

1. The Knowledge Base (Notion)
Every role needs to build structured pages, databases, linked views, and templates before they can maintain the knowledge bases their agents will read from.

If your team has never used Notion, or has used it poorly (which is the same thing), this is where training starts. Not with AI. With the tool that holds the context AI needs to be useful. Each role builds databases with properties, filters, and views relevant to their domain. They learn to link between pages and databases, because that is how agents traverse related context. And they learn to maintain their section without it drifting into chaos within 30 days.

2. Prompt Literacy
Most people are still interacting with AI the way they interact with Google: type a question, get a response, manually fix it. The gap between that and building a .md file is enormous.

Before the team designs agent architecture, each person needs to understand how to write a clear role definition, how to specify sources of truth, how to write behavioral constraints, and how to give examples that shape output quality. This is a learnable skill. It takes structured practice. Daily exercises in writing clear instructions, specifying constraints, and iterating on output quality. Not a webinar. Not a lunch and learn.

3. The .md Infrastructure
Only after tool proficiency and prompt literacy are in place does the team start building the actual .md files that power their agents.

Skipping to this layer is why most adoption fails. The .md file is only as good as the knowledge base it reads from and the person who designed it. A beautifully structured deal-intelligence.md connected to a Notion workspace full of outdated, disorganized pages will produce outputs that are technically impressive and practically useless.

Someone has to own this transition. Not as a side project. Not as "the person on the team who is good with AI." A named trainer with dedicated time and explicit authority to hold the team accountable to the adoption plan. This can be an internal champion or an external resource. What it cannot be is nobody.

The Vacation Test

If the system only works when one person is pushing it, the system does not work.

The trainer's ultimate success metric is not "did the team learn AI." It is "does the team still use the infrastructure 90 days after I stop running sessions." The trainer steps back entirely for two weeks during the final month. Does the team keep using it? Does the Notion knowledge base stay current? Do the agents keep running?

If yes, you have a system. If no, you have a dependency on a single person, and you need to go back to the change management layer.

The Transition

Five Phases. Sixteen Weeks.

The transition is not a training program. It is an organizational investment with a specific timeline, specific deliverables per phase, and a deliberate tradeoff that leadership has to commit to before any of it starts.

Transition Architecture

From Tool Proficiency to Infrastructure Mode

P0 (Weeks -4 to -1): Tool Proficiency. Notion + prompt literacy.
P1 (Week 0): The Tradeoff. Leadership commitment.
P2 (Weeks 1 to 2): Foundational Training. First .md file shipped.
P3 (Weeks 3 to 6): Assisted Reinforcement. 40-60% of retrieval automated.
P4 (Weeks 7 to 12): Infrastructure Mode. Agents serve the org.

The Architecture

Four Roles. Twelve Agents.

The AI-native RevOps team requires four roles. Not because these are the only roles in RevOps, but because these four represent the minimum surface area to cover the full revenue cycle. The Sales Leader owns how deals are won. Sales Ops is the GTM arm: how the selling motion is structured, measured, and optimized. Business Ops is the financial arm: how revenue activity translates into the language of the CFO, the board, and the operating review. Enablement ensures the team has the assets: the knowledge, the competitive positioning, and the onboarding infrastructure that makes everyone sharper.

Each role owns a distinct set of .md files. Each .md file powers a specific agent. The agents connect to the shared tech stack but serve different users and different decisions. Click any role below to see its agent architecture.

🎯 Role 01: The Sales Leader
Frontline revenue leader. Carries or directly manages a quota. Makes $50M+ in deal decisions based on gut feel and whatever the rep remembers to share.

deal-intelligence.md: Powers deal strategy before every review, forecast call, and 1:1. Surfaces risk signals, activity gaps, and competitive dynamics from live CRM data.

forecast-prep.md: Runs Sunday night. Assembles pipeline snapshot, week-over-week movement, coverage ratio, and risk flags before the Monday revenue call.

qbr-builder.md: Quarterly business review prep from pipeline data, historical performance, territory analysis, and competitive landscape shifts.

Tech Stack: Salesforce, Gong, Google Drive, Slack, Notion
Output Consumers: Direct reports, CRO, board
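To make the forecast-prep workflow concrete, here is a minimal sketch of the coverage-ratio and week-over-week math it automates. The deal records and field names are hypothetical stand-ins for a real Salesforce pull, not actual API fields.

```python
# Hypothetical open-pipeline snapshots for the Monday revenue call.
# Field names are illustrative, not real Salesforce API names.
this_week = [
    {"name": "Acme", "amount": 120_000, "stage": "Commit"},
    {"name": "Globex", "amount": 80_000, "stage": "Best Case"},
    {"name": "Initech", "amount": 45_000, "stage": "Pipeline"},
]
last_week = [
    {"name": "Acme", "amount": 100_000, "stage": "Best Case"},
    {"name": "Globex", "amount": 80_000, "stage": "Best Case"},
]

def coverage_ratio(deals, remaining_quota):
    """Open pipeline divided by remaining quota: the headline number."""
    return sum(d["amount"] for d in deals) / remaining_quota

def week_over_week(current, prior):
    """Net pipeline movement since the last snapshot."""
    return sum(d["amount"] for d in current) - sum(d["amount"] for d in prior)

print(f"Coverage: {coverage_ratio(this_week, 100_000):.2f}x")  # 245k against 100k remaining
print(f"WoW movement: {week_over_week(this_week, last_week):+,}")
```

A coverage ratio of roughly 3x remaining quota is a commonly cited early-warning threshold, though the right number is business-specific; the point is that the agent computes it before the call rather than someone assembling it Sunday night by hand.
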
⚙️ Role 02: The Sales Ops Lead
The GTM arm. Architects the sales machine: territories, quotas, pipeline reporting, CRM hygiene, and the operational infrastructure that lets sellers sell.

pipeline-analyst.md: Pipeline health, conversion rates, stage velocity, generation trends. Always shows trends, not snapshots. Flags anomalies in stage concentration and ASP shifts.

territory-modeler.md: Territory design, balance analysis, scenario modeling. "What if we add two reps to West?" Always shows the do-nothing baseline alongside proposed changes.

comp-plan-modeler.md: Compensation scenario modeling. Cost at various attainment distributions, accelerator analysis, quota-to-OTE ratios. Always shows both the rep's and the company's perspective.

Tech Stack: Salesforce, Google Sheets, Notion, Google Drive
Output Consumers: Sales leadership, finance, CRO, board prep
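The rep-versus-company duality the comp plan modeler enforces can be sketched as two functions over the same plan: commission for one rep at one attainment level, and total cost across a distribution of attainments. All rates and dollar figures here are hypothetical.

```python
def commission(attainment, variable_comp, accelerator=1.5):
    """Commission at a given attainment, with a simple accelerator on
    the portion above 100% of quota. Rates are hypothetical."""
    base = min(attainment, 1.0) * variable_comp
    over = max(attainment - 1.0, 0.0) * variable_comp * accelerator
    return base + over

def plan_cost(attainments, variable_comp):
    """Total payout across a distribution of rep attainments:
    the company-side view of the same plan."""
    return sum(commission(a, variable_comp) for a in attainments)

# Rep view: one rep at 120% of quota with $100k variable comp.
print(commission(1.2, 100_000))  # ~130,000: full variable comp plus 20% over quota at 1.5x
# Company view: payout across a hypothetical attainment distribution.
print(plan_cost([0.7, 0.9, 1.0, 1.2, 1.4], 100_000))
```

The "always shows both perspectives" rule in the .md file maps directly to running both functions on every scenario.
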
📊 Role 03: The Business Ops Lead
The financial arm. Bridge between the GTM organization and the CFO's office. Translates revenue activity into recognized revenue, budget variance, and investor-facing metrics.

operating-review-prep.md: Weekly and monthly operating review package. Pulls from sales, marketing, CS, and finance. Follows the company's existing template exactly. Accuracy is non-negotiable.

data-reconciliation.md: Cross-system data validation. Compares Salesforce, finance, and CS platform metrics daily. Catches discrepancies before they hit a stakeholder meeting.

board-deck-builder.md: Quarterly board deck revenue section. Executive summary, financial performance, pipeline health, risks, and board asks. Every number needs its comparison point.

Tech Stack: Salesforce, Google Sheets, Notion, CS Platform, Marketing Automation
Output Consumers: CEO, CFO, board, CRO, VP Marketing, VP CS
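The daily reconciliation check is simple to sketch: pull the same metric from each system and flag anything outside a relative tolerance, so the discrepancy surfaces before the meeting rather than in it. System names and numbers below are hypothetical.

```python
def reconcile(metrics_by_system, tolerance=0.01):
    """Compare the same metric across systems; return any metric whose
    readings spread by more than the relative tolerance."""
    issues = []
    for metric, readings in metrics_by_system.items():
        values = list(readings.values())
        baseline = values[0]
        spread = max(values) - min(values)
        if baseline and spread / abs(baseline) > tolerance:
            issues.append((metric, readings))
    return issues

# Hypothetical daily pulls. ARR disagrees by ~2%; bookings agree.
pulls = {
    "bookings_q1": {"salesforce": 4_200_000, "finance": 4_200_000},
    "arr": {"salesforce": 18_400_000, "finance": 18_050_000, "cs_platform": 18_400_000},
}
for metric, readings in reconcile(pulls):
    print(f"DISCREPANCY {metric}: {readings}")
```

The real agent would also explain likely causes (timing, definitions, currency), but the core mechanism is this comparison loop run on a schedule.
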
📚 Role 04: The Enablement Lead
The assets arm. Makes the sales team sharper. Owns the knowledge, competitive intel, onboarding, and the content sellers use in live deals.

knowledge-agent.md: The primary rep-facing agent. Surfaces competitive positioning, product knowledge, and objection handling in real time during live deals. The 8 critical attributes from the earlier essays.

onboarding-agent.md: Specialized for new hires. Assumes the person knows nothing about the company. Explains acronyms, links to curriculum, tracks frequently asked questions as gap reports.

competitive-intel.md: Real-time competitive positioning. Connected to live sources, not quarterly battle card updates. Date-stamps every competitive claim. Acknowledges competitor strengths honestly.

Tech Stack: Notion, Google Drive, Salesforce, Gong, Slack
Output Consumers: All AEs and SDRs, sales leadership, product marketing

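The gap-report idea in the onboarding agent is, at its core, a frequency counter over questions: what new hires keep asking is what the curriculum is missing. A minimal sketch, with hypothetical questions and an arbitrary threshold:

```python
from collections import Counter

class GapTracker:
    """Count questions new hires ask; frequent ones become a gap report
    that tells Enablement what the onboarding curriculum is missing."""
    def __init__(self):
        self.questions = Counter()

    def log(self, question):
        # Normalize lightly so repeats of the same question group together.
        self.questions[question.strip().lower()] += 1

    def gap_report(self, threshold=3):
        """Questions asked at least `threshold` times, most frequent first."""
        return [(q, n) for q, n in self.questions.most_common() if n >= threshold]

tracker = GapTracker()
for q in ["What does ACV mean?"] * 4 + ["Where is the pricing page?"] * 2:
    tracker.log(q)
print(tracker.gap_report())  # [('what does acv mean?', 4)]
```

A production version would cluster paraphrases rather than exact strings, but the loop from "question asked" to "content gap identified" is the same.
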
The .md File

What the Agent Brain Looks Like

Every .md file follows the same structural pattern. The content changes per role. The architecture does not. Here is the skeleton for the Sales Leader's deal intelligence agent, the one that gets used most frequently, to show how the pieces fit together. A RevOps leader could take this, replace every placeholder with their company's specifics, and have a functioning agent by end of week.

deal-intelligence.md
# Role & Identity

# Be specific. Not "you are a helpful assistant."

You are the Deal Intelligence Agent for [Company]'s
sales leadership team. You analyze pipeline data, deal
progression, and competitive dynamics to surface risk,
opportunity, and recommended next actions. You serve
the frontline revenue leader and their direct reports.


# Sources of Truth

# Routing logic. This prevents hallucination.

- Deal stage, close date, amount, activity → Salesforce
- Call summaries, sentiment, objections → Gong
- Account plans, mutual action plans → Google Drive
- Rep commentary, customer signals → Slack deal rooms
- Competitive positioning → Notion: /competitive-intel/


# Behavioral Rules

# Non-negotiable constraints.

1. Never forecast a deal. Surface signals. Leader decides.
2. Flag: close date moved more than once.
3. Flag: missing confirmed economic buyer contact.
4. Flag: last meaningful activity > 10 days ago.
5. Always pull fresh Salesforce data. Never cached context.
6. If rep narrative conflicts with activity data, note it.
   Do not pick sides.


# Scenario Patterns

# Teach the agent what good looks like.

Q: "What's the risk in this quarter's commit?"
A: Pull all commit deals, flag those with movement risk
   (stage regression, missing contacts, stale activity).
   Summarize: "X deals totaling $Y have risk signals."

Q: "Prep me for my 1:1 with [Rep]."
A: Pull pipeline, recent closed/lost, current attainment,
   and flagged deals. Format as a briefing.

The same structure applies to every .md file across all four roles. The pipeline analyst .md has different sources of truth and different behavioral rules, but the same sections in the same order. The board deck builder .md connects to different systems, but follows the same architectural pattern. Consistency is what makes the system maintainable. When any .md file has the same skeleton, anyone on the team can read, review, and improve any other role's agents.
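The behavioral rules in the skeleton are deterministic enough to sketch directly in code. A minimal version of the three flag checks, assuming hypothetical field names on a deal record:

```python
from datetime import date, timedelta

def risk_flags(deal, today):
    """Apply the skeleton's flag rules. Surfaces signals only;
    the forecast call stays with the leader (rule 1)."""
    flags = []
    if deal.get("close_date_changes", 0) > 1:
        flags.append("close date moved more than once")
    if not deal.get("economic_buyer_confirmed"):
        flags.append("missing confirmed economic buyer contact")
    if today - deal["last_activity"] > timedelta(days=10):
        flags.append("last meaningful activity > 10 days ago")
    return flags

# Hypothetical deal record: all three rules should fire.
deal = {
    "name": "Acme renewal",
    "close_date_changes": 2,
    "economic_buyer_confirmed": False,
    "last_activity": date(2025, 3, 1),
}
print(risk_flags(deal, today=date(2025, 3, 20)))
```

In practice the agent applies these rules to fresh CRM data per rule 5; what matters is that the rules are explicit constraints in the .md file, not vibes the model is left to infer.
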

The Compounding

The Flywheel Nobody Sees Coming

The real leverage of this architecture is not any single agent. It is how the four roles compound. Enablement makes reps sharper, which improves deal quality for the Sales Leader, which improves pipeline accuracy for Sales Ops, which improves forecast quality for Business Ops, which improves board confidence, which funds more investment in the team. Each agent's output feeds the next agent's context.

Compounding Architecture

How Four Agents Become One System

The loop, as the diagram shows it: Enablement drives effective GTM, the Sales Leader wins better deals, Sales Ops turns those deals into sharp execution, and Business Ops turns execution into a precise forecast. Reps close better, deals feed pipeline, pipeline feeds forecast, and forecast confidence funds the team.

12 agents across 4 roles. 16 weeks from zero to infrastructure. 40-60% of retrieval automated by week 6. Vacation test at day 90.

The Experiment

Start With Four People

If you are a RevOps leader reading this and thinking "I cannot do this with my whole team at once," you are right. Design an experiment. Pick one person from each of the four roles. Give them the training investment described above, starting with the tool proficiency phase. Assign a trainer. Set a 16-week timeline.

Measure: Time to Report

How long does it take to assemble the weekly pipeline report, the operating review, the board deck? Measure before. Measure after. The delta is usually a 60 to 80% reduction.

Measure: Decision Prep

How long does the Sales Leader spend preparing for a deal review or a 1:1? Before: 30 to 45 minutes per rep. After: a briefing generated in 90 seconds that surfaces what matters.

Measure: Data Accuracy

How many data discrepancies surface in meetings vs. being caught beforehand? The reconciliation agent alone typically eliminates the "wait, that number is wrong" moment from the operating review.

The experiment is not "did AI help." The experiment is "does a team with purpose-built .md infrastructure, genuine tool proficiency, and a structured transition outperform a team that is using AI ad hoc." The answer, in every case I have seen, is that it is not close.

And then run the vacation test. The trainer steps back entirely for two weeks during the final month. Does the team keep using the infrastructure? If yes, you have something that compounds. If no, go back to the change management layer. That is not a failure. That is information about where the real work needs to happen.

The Bottom Line

The tools exist. The models are capable. The bottleneck is organizational: the willingness to invest training time, accept a slower quarter, and design the transition instead of hoping people figure it out. The team that does this work now will be operating at a fundamentally different level 18 months from now. Everyone else will still be asking Claude questions and spending hours revising the output.