Generalist Humans, Specialist Tools

aidebatable

The future belongs to humans who have the range to see the whole board, but who build tools that can hit a target the size of a coin.


I've read David Epstein's Range. It argues that in a wicked, complex world, generalists triumph over specialists. They have the context, the breadth, and the agility to survive where specialists get stuck.

That advice is perfect for you, the human. But if you are building software in the age of AI, you must do the exact opposite.

The Law of the Big Model

Sam Altman once said that if your product benefits from the big models getting smarter, you are on the right side of history.

This implies a brutal corollary: Your tool should never try to be a better generalist than the model.

A year or two ago, we saw a Cambrian explosion of "AI recommendation" sites. Movie recommenders, book finders, general "explain it to me" wrappers. They are all dead or dying. Why? Because I can just ask Gemini.

Gemini doesn't need your wrapper to recommend a movie. It understands the nuance of my request better than your logic ever could. No new product will ever outcompete these big machine learning models at being generalists.

The VibeCode Trap vs. The Ralph Wiggum Solution

I recently considered building a "VibeCode cleanup solution" to tidy up the messy code AI generates. But I stopped.

How long until OpenAI or Google just makes a tiny optimization to their model that cleans up its own code? A couple of months? If I build a general "cleanup" tool, I am betting against the model getting smarter. I will lose.

Instead, look at the rise of tools like Ralph Wiggum. This isn't a generalist wrapper. It is a hyper-specialized loop. It doesn't just "vibe" with the code. It runs a persistent, agentic loop that forces code to pass tests before committing. It avoids "vibe coding garbage" by specializing in the verification process rather than the generation process.
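The core mechanic is simple enough to sketch. Here is a minimal version of that kind of generate-verify-retry loop in Python; `propose_change` and `run_tests` are hypothetical stand-ins for the model call and the real test runner, not any tool's actual API.

```python
# A minimal sketch of a "generate, verify, retry" loop in the spirit of
# Ralph Wiggum-style agents. The model proposes a change, the test suite
# judges it, and only verified work ever escapes the loop.

def run_verified_loop(propose_change, run_tests, max_attempts=5):
    """Keep asking the model for a change until the tests pass."""
    feedback = None
    for attempt in range(1, max_attempts + 1):
        change = propose_change(feedback)    # model call (stubbed here)
        ok, feedback = run_tests(change)     # verification step
        if ok:
            return change, attempt           # verified work only
    raise RuntimeError(f"no passing change after {max_attempts} attempts")
```

The point of the structure is that the specialization lives in `run_tests`, not in the model: the loop never trusts generation, only verification.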

The Manus Lesson

Consider Meta's acquisition of Manus. Meta didn't buy them because they built a better generalist model than Llama, and they never tried to out-reason Gemini. They were acquired because they became incredible specialists in building agents: they focused on the action rather than the intelligence.

The Departmental Fallacy

We are entering an era where Generalist Humans write Specialist Tools.

I recently interviewed CEO and tech leader Kathy Slowinski. She told me that "departments are a constructed fallacy."

Her point is that the traditional silos of "Sales" or "Marketing" or "Engineering" are dissolving. These divisions only existed because the skills required to execute them were too specialized for one person to hold. AI bridges that gap. We are moving toward generalist operators who can traverse all these domains.

But this is the critical distinction: these generalist operators should not build generalist tools.

If you work in sales, do not try to build "SalesGPT." Salesforce or OpenAI will release a GPT-6 trained on every enterprise interaction in history, and your generalist wrapper will be obsolete instantly.

Instead, combine GPT with something like FireCrawl and use the strengths of both to build a tool that is specialized for one particular action.
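As an illustration only, here is what such a single-action tool might look like. `scrape_page` and `ask_model` are hypothetical stand-ins for a FireCrawl scrape and a GPT call; they are injected as functions so the specialist logic stays visible and testable on its own.

```python
# A hedged sketch of a specialist tool: one narrow action for a sales
# pipeline, built on top of generalist services rather than competing
# with them.

def build_prospect_brief(url, scrape_page, ask_model):
    """Do exactly one thing: turn a company URL into a one-line brief."""
    page_text = scrape_page(url)  # e.g. a FireCrawl scrape (stubbed here)
    prompt = (
        "In one sentence, what does this company sell and to whom?\n\n"
        + page_text
    )
    return ask_model(prompt)      # e.g. a GPT call (stubbed here)
```

Note that the tool contains almost no intelligence of its own. The value is in the narrowness of the question and its fit to one pipeline, which is exactly what a frontier model cannot make obsolete by getting smarter.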

That is exactly what we tried to do. We didn't build a framework. We didn't try to solve "Sales." We built a specialist integration that does one specific thing for our pipeline.

It is only a proof of concept, but the lesson stuck: the more I look at the result, the surer I am of this approach. The future belongs to the humans who have the range to see the whole board, but who build the tools that can hit a target the size of a coin.