The most powerful AI might not be the biggest one.
On AGI vs. vertical AI, the case for going deep instead of wide, and what a small model that actually knows your domain could unlock.
There's a massive race happening right now to build AGI. Artificial general intelligence. A model that can do everything, reason about anything, handle any task you throw at it.
And that's exciting. But I keep coming back to a different question.
What if the most valuable AI for most of us isn't the one that knows a little about everything, but the one that knows everything about our thing?
AGI
Knows a little about everything. Optimized for breadth. Massive compute. Massive energy. The goal: one model to rule them all.
Vertical AI
Knows everything about one thing. Optimized for depth. Smaller footprint. Fraction of the energy. The goal: be the undisputed expert in a domain.
Large language models are extraordinary. They're also extraordinarily expensive to run. The energy cost alone is staggering. And for most business applications, you're paying for a massive general model to do a narrow, specific job.
Small language models flip that equation.
Less energy. Less compute. More specificity. Better outcomes in the domain that actually matters to you.
Let me give you two examples. One personal. One professional.
Not a chatbot that summarizes Wikipedia. A domain engine that produces outcomes. Imagine a model trained only on San Francisco — not broad, not generic, relentlessly deep. It does not "know about" San Francisco. It behaves like four different experts compressed into one system.
Ask how Japantown evolved block by block from post-internment reconstruction to today. It shows you zoning shifts, archival photos from each decade, migration patterns, business turnover, and the policy decisions that shaped it. It cites primary sources. It maps cause and effect. Context, not anecdotes.
Planning a three-day trip? It designs an itinerary based on weather patterns, seasonal fog timing, neighborhood energy at different hours, and restaurant reservation friction. It routes you efficiently. It anticipates transit delays. It adjusts in real time if the wind shifts or Ocean Beach closes. It does not recommend "top 10 attractions." It sequences experiences.
Restaurants? Not just ratings. It knows chef lineages, ownership changes, wine list depth, and kitchen philosophy. It understands that someone who loved Kin Khao in 2018 may prefer a different Thai concept today because the chef moved. It can recommend a tasting menu under $120 within walking distance of the Opera on a rainy Tuesday. It knows which tables are best.
It knows MUNI's actual reliability patterns by line, time of day, and season. It knows that the 38R runs hot in the morning, that the Castro to SoMa is fine by bike except on rainy Fridays, and that Caltrain timing matters more than distance for a 7pm dinner in Palo Alto. Not a routing API. Judgment built on patterns.
Now apply that same thinking to revenue operations. RevOps spans so many functions and industries. Imagine small models trained deeply on each one.
Not a general model that gives you a decent answer across all of those functions. Domain models that give you the answer your best, most experienced operator in each function would give. At scale. In seconds.
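To make the "domain models at scale" idea concrete, here's a minimal sketch of how a fleet of small specialists might sit behind one entry point: a router picks the right domain model for each question. Everything here is hypothetical (the `Specialist` class, the keyword routing, the function-specific models); a production system would route with an embedding classifier and call real fine-tuned models, but the shape is the same.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Specialist:
    """A small model trained deeply on one RevOps function (hypothetical)."""
    domain: str
    keywords: set[str]            # crude routing signal; a real router would
                                  # use an embedding or intent classifier
    answer: Callable[[str], str]  # stand-in for the small model's inference call

def route(question: str, specialists: list[Specialist]) -> Specialist:
    """Send the question to the specialist whose domain matches it best."""
    words = set(question.lower().split())
    return max(specialists, key=lambda s: len(s.keywords & words))

specialists = [
    Specialist("forecasting", {"pipeline", "forecast", "quarter"},
               lambda q: f"[forecasting model] {q}"),
    Specialist("comp", {"commission", "quota", "comp"},
               lambda q: f"[comp model] {q}"),
]

best = route("How should we adjust the pipeline forecast this quarter?", specialists)
print(best.domain)  # -> forecasting
```

The point of the sketch is the architecture, not the keyword matching: each specialist is small, cheap to run, and deep in one function, and the router is the only general-purpose piece.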
AGI asks: "Can this model do everything?"
Vertical AI asks: "Can this model do my thing better than anyone?"
I'm not anti-AGI. The research is important and the breakthroughs are real. But for operators, for people building and running systems today, the vertical AI path might be the one that delivers value fastest.
Smaller models. Deeper training. Less energy. More signal. Applied to the domain you actually work in every day.
We don't all need a model that can do everything. Most of us need one that's absurdly good at our thing.
What's your "San Francisco"?
What domain would you want an absurdly deep, specialized model built for?