Will Generalists Triumph in an AI World?
- Palak Raheja

Many of you have likely come across David Epstein’s Range: Why Generalists Triumph in a Specialized World. Epstein argues that, in an era that rewards adaptability, breadth of thought, and cross-domain synthesis, generalists often find creative solutions that specialists might overlook.
But in a world increasingly shaped by AI, where specialized models outperform humans in narrow domains, and expertise can be outsourced to algorithms, what happens to the generalist edge?
This question hit home for me recently. Equipped with just a free Perplexity account in a recent professional setting, I completed in less than half the time a task that, in my consulting days, would have consumed days or even weeks of time, effort, and budget. I'd been thinking about the AI threat for a while, but this was the moment it stopped being theoretical. Many MBA students came from (and are heading into) consulting, investing, and other generalist-heavy fields. So this isn't an abstract debate. It's about whether the attribute many of us quietly count on as our differentiator is… still a differentiator.
Epstein’s core argument is that the modern world is full of what he calls wicked environments: places where rules are fuzzy, feedback is slow or misleading, and the same problem rarely shows up twice. Think markets, geopolitics, (corporate) strategy, etc. In those worlds, breadth and the ability to connect ideas across domains matter more than ultra-narrow specialization.
But here’s the uncomfortable part: some of the moves generalists pride themselves on are exactly what large language models are getting good at. Give a model a new industry, and it will happily summarize earnings calls, map the competitive landscape, and draft a halfway decent memo, all before I would’ve finished a cup of cold brew.
And the baseline skill of learning any industry quickly and making the associated deck is getting commoditized. If every junior consultant and associate can pair a basic finance toolkit with an AI copilot that digests 10-Ks and news feeds, that’s no longer your edge. That’s the starting point.
So yes, AI is nibbling at the Range story in more ways than one.
So… what actually still needs a human generalist?
The good news is that AI is far less impressive once you leave the neat world of answering a well-posed question and enter the messier world of asking, “what is the question?”
Three things still feel deeply human and somewhat aligned with what true generalists do well:
Clarity of thought: Models are incredible once the problem is well posed. However, they’re much worse at deciding which questions matter, whose incentives are at stake, and what success really looks like in an actual organization. That kind of problem-framing in wicked environments is exactly where generalists shine, allowing them to draw on varied experiences, spot analogies, and avoid the trap of one mental model.
Stitching together humans and machines: Generalists (whether CEOs or cross-field researchers) tend to be good at integration. They pull together different technologies, teams, and markets into coherent bets. In the AI era, the definition of a team now includes human specialists, generalist leaders, and specialized models. Someone must design that system, decide where AI is trustworthy, and reconcile conflicting inputs. That’s integrator work.
Intangibles and judgment: A recent report from the AI Workforce Consortium (Cisco, Google, Microsoft, etc.) found that 78% of IT roles now require AI skills and that companies are facing critical shortages in AI ethics, security, and governance. It also notes that human skills like communication and leadership are becoming more important as AI spreads. In other words, organizations don’t just need people who can use AI. They need people who can be accountable for how it’s used. These aren’t problems you solve with an extra model. They’re about context, trade-offs, and values. Breadth is an advantage here, not a liability.
That said, the bar for being a pure generalist has definitely been raised. Across industries and job postings, you can see the shift toward what people call the T-shaped professional: someone with real depth in at least one area and broad, cross-functional skills layered around it. And now with AI in the mix, that shape evolves again. Depth can’t just mean you’ve dabbled in a topic; it has to be deep enough that you can actually evaluate and steer AI in that domain, not just rubber-stamp whatever it spits out. And breadth isn’t just being well-read or good at frameworks; it now includes AI literacy — or knowing where these tools shine, where they break, and how to design workflows that mix humans and models without drowning the team in AI-generated work that looks impressive but says nothing.
The job market is already rewarding these hybrids. The people earning the premium in AI-related roles tend to be the ones who were already genuinely good at something (design, analytics, product, finance, etc.) and then layered AI fluency on top of that. It’s not about being a generalist with a shiny toolkit; it’s about being someone who can anchor a discussion with expertise and then widen the lens with the help of intelligent tools.
So what does all this mean if you identify as a generalist? If your pre-HBS elevator pitch was something like, “I’m a smart, curious generalist who likes hard problems,” the bar is moving. The old edge, the ability to get smart on anything quickly, just isn’t the differentiator it once was. The new edge is the ability to frame the right problem, bring together the right humans and the right models, and own the consequences. In practice, that means picking at least one area where you’re willing to go deep enough that others truly trust your judgment. It means treating AI less like magic and more like a slightly chaotic junior team member whose work needs to be checked, guided, and sometimes ignored. And it means leaning into the classic Range strengths of curiosity, cross-domain thinking, and pattern recognition, but pointing them toward designing human-AI systems, not competing with AI at commoditized tasks and skills.
And that brings us back to the core question: will generalists still triumph in an AI world? If by “generalist,” we mean someone who’s good at making decks, then probably not. As we know, AI already does a lot of that work, and it does it faster and cheaper. But if we’re talking about the kind of generalist Epstein celebrates — broad, curious, context-sensitive people who thrive in wicked environments — and we combine that with real depth in at least one domain and a working fluency in AI, then the answer looks very different. That version of the generalist doesn’t just survive the AI era. That might be exactly the person everyone is trying to hire.

Palak Raheja (MBA ’26) is originally from Lucknow, India. She graduated from Lady Shri Ram College, University of Delhi, with degrees in Statistics and Economics. Prior to HBS, she worked at Bain India, and in the Indian consumer and health-tech start-up ecosystem. In her free time, she can be found reading, running, or watching movies.





