Bola Akinsanya
⚡ RevOps Perspective

The most powerful AI might not be the biggest one.

On AGI vs. vertical AI, the case for going deep instead of wide, and what a small model that actually knows your domain could unlock.

There's a massive race happening right now to build AGI. Artificial general intelligence. A model that can do everything, reason about anything, handle any task you throw at it.

And that's exciting. But I keep coming back to a different question.

What if the most valuable AI for most of us isn't the one that knows a little about everything, but the one that knows everything about our thing?

🌐

AGI

General Intelligence

Knows a little about everything. Optimized for breadth. Massive compute. Massive energy. The goal: one model to rule them all.

🎯

Vertical AI

Domain Intelligence

Knows everything about one thing. Optimized for depth. Smaller footprint. Fraction of the energy. The goal: be the undisputed expert in a domain.

Large language models are extraordinary. They're also extraordinarily expensive to run. The energy cost alone is staggering. And for most business applications, you're paying for a massive general model to do a narrow, specific job.

Small language models flip that equation.

Energy cost:          LLM massive  |  SLM minimal
Breadth of knowledge: LLM wide     |  SLM narrow
Depth in domain:      LLM shallow  |  SLM deep

Less energy. Less compute. More specificity. Better outcomes in the domain that actually matters to you.
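To make the compute gap concrete, here's a back-of-envelope sketch using the common approximation of roughly 2 FLOPs per parameter per generated token. The model sizes are assumptions for illustration, not figures from any specific model:

```python
# Back-of-envelope inference cost comparison.
# Rule of thumb: a forward pass costs ~2 FLOPs per parameter per token.

def inference_flops(params: float, tokens: int) -> float:
    """Approximate FLOPs to generate `tokens` tokens with a model of `params` parameters."""
    return 2 * params * tokens

llm_params = 1e12   # assumption: a 1-trillion-parameter general model
slm_params = 3e9    # assumption: a 3-billion-parameter domain model
tokens = 500        # one typical answer

ratio = inference_flops(llm_params, tokens) / inference_flops(slm_params, tokens)
print(f"The general model burns roughly {ratio:.0f}x the compute per answer")
```

Under these assumptions, the ratio is just the parameter ratio: a few hundred times the compute for every single answer, before you even count the cost of training.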

Let me give you two examples. One personal. One professional.

🌉 A model that knows San Francisco

I love this city. The history, the neighborhoods, the layers that only someone who really knows SF understands.

Now imagine a small language model trained deeply on just San Francisco. Not a general model giving you Wikipedia answers. Something that knows where the original cottages are in Parkside. The full history of every restaurant that's turned over on Clement Street. Everything about Ocean Beach, from the geology to the rip currents to why surfers sit at certain breaks in certain seasons. Every engineering detail of the Golden Gate Bridge. The evolution of each street block by block, decade by decade, with the historical photos to match.

All of that packed into one model. Not a chatbot that gives you a decent overview. Something so nuanced it feels like talking to a lifelong local who also happens to be a historian, an urbanist, and a food critic. That's not AGI. That's vertical intelligence. And it would be incredible.


⚙️ A model that knows RevOps

Now apply that same thinking to revenue operations. RevOps spans so many functions and industries. Imagine small models trained deeply on each one:

🎯 Sales Enablement: Knows which content converts at each stage, how top reps sequence outreach, and what coaching patterns actually move win rates
📈 Forecasting: Understands pipeline velocity patterns across SaaS, fintech, enterprise. Knows the leading indicators that general models miss entirely
🔧 Tools + Systems: Deep knowledge of CRM architecture, integration patterns, data hygiene. Not generic documentation. The tribal knowledge of someone who's migrated 50 orgs
📊 Sales Operations: Territory design, comp modeling, quota setting. The nuances that differ between a 20-person sales team and a 2,000-person one
📞 CS + Call Center Ops: Handle time optimization, escalation routing, churn signal detection. Trained on millions of interactions, not general language
💰 Finance + Reporting: Revenue recognition patterns, billing edge cases, the gap between bookings and cash that keeps finance and ops teams up at night

Not a general model that gives you a decent answer across all of these. Domain models that give you the answer your best, most experienced operator in each function would give. At scale. In seconds.
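One way to picture this in practice is a thin router sitting in front of a fleet of domain models. This is a minimal sketch, not a real system: the keyword overlap is a crude stand-in for whatever classifier would actually route queries, and every name here (DomainModel, route, the keyword sets) is hypothetical:

```python
# Hypothetical sketch: route a RevOps question to the deepest domain model.
from dataclasses import dataclass, field

@dataclass
class DomainModel:
    name: str
    keywords: set = field(default_factory=set)  # crude stand-in for a real classifier

    def matches(self, query: str) -> int:
        """Score the query by keyword overlap with this model's domain."""
        return len(set(query.lower().split()) & self.keywords)

MODELS = [
    DomainModel("forecasting", {"pipeline", "forecast", "velocity", "quarter"}),
    DomainModel("sales-ops", {"territory", "quota", "comp", "commission"}),
    DomainModel("cs-ops", {"churn", "escalation", "handle", "csat"}),
]

def route(query: str) -> str:
    """Send the query to whichever small model knows this domain best."""
    return max(MODELS, key=lambda m: m.matches(query)).name

print(route("Why did pipeline velocity dip this quarter"))  # -> forecasting
```

The point of the sketch is the shape, not the scoring: instead of one giant model answering everything passably, a cheap dispatcher hands each question to the specialist that would answer it best.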

AGI asks: "Can this model do everything?"
Vertical AI asks: "Can this model do my thing better than anyone?"

I'm not anti-AGI. The research is important and the breakthroughs are real. But for operators, for people building and running systems today, the vertical AI path might be the one that delivers value fastest.

Smaller models. Deeper training. Less energy. More signal. Applied to the domain you actually work in every day.

We don't all need a model that can do everything. Most of us need one that's absurdly good at our thing.

💬

What's your "San Francisco"?

What domain would you want an absurdly deep, specialized model built for?