What We Lose with AI
- Mira Nagarajan

The more we optimize, the less we notice what’s missing
I’ve been experimenting with vibecoding recently, and it’s been making me feel…weird. Trying to settle your brain while it’s shooting off application ideas to Claude must be what it feels like for pets to have the zoomies: joyous entropy. It’s clear that AI can take the tasks we both labor over and hate to do and automate them accurately, and I do feel like I’ve become more efficient.
But I often come out of these vibecoding sessions with feelings of unease as well. Should this really be so easy? We spend so much time thinking about what we gain from AI: I wanted to know what we were potentially losing. What exactly are we trading away?
One of the biggest arguments against using AI is that it stifles learning. Anthropic noted in a recent report: “...not all AI-reliance is the same: the way we interact with AI while trying to be efficient affects how much we learn.” The study found that developers who used AI for conceptual questions scored 65%+, while those who fully delegated scored under 40%. This is especially concerning because these trends suggest a future of work with AI that is judged on outputs, not inputs. After all, producing more output is in a company’s best interest for cost efficiency.
A recruiter I spoke to recently told me that they now screen for AI usage in coding interviews because they want AI-native employees who can produce work the fastest. But often, as the Anthropic study shows, the inputs are where the learning happens. In a recent interview with David Perell on AI replacing journalism, Ezra Klein argues that AI Deep Research can give the illusion of knowledge by minimizing the “struggle” of understanding. One of my favorite lines: “part of what is happening when you spend seven hours reading a book is you spend seven hours with your mind on this topic.” The struggle of not knowing, it turns out, may not be human inefficiency: it’s how true understanding forms.
There is also evidence that AI makes us, on average, less creative. This idea should have a lot of personal resonance for MBA students. At a recent student breakfast, HBS Dean Srikant Datar, when asked what defines standout alumni, said: “Successful people are those who see things others can’t.” It pays to be different today, as AI is making us all converge toward the norm.
A Wharton study recently showed that while AI can increase the number of novel ideas generated, the diversity of those ideas decreases. AI is raising the floor of individual creativity but lowering the ceiling of collective discovery. Given that research breakthroughs often come from the “long tail,” the long-term implication is that using AI for research could make us less successful at discovering innovations with outsized impact. As Royston Roberts notes in his 1989 book Serendipity: Accidental Discoveries in Science, many of the greatest scientific discoveries were byproducts of, or mistakes in, other studies; penicillin wasn’t optimized into existence. As innovation has slowed over the last 100 years, does AI have the potential to codify this decline by pulling ideas toward the median?
As AI has percolated through the workforce, there is also evidence that the same features that make humans more efficient are the ones that cause us the most distress. Three HBR articles from the last six months bring this into sharp focus: “Why AI at Work Makes Us So Anxious” found that AI makes us feel out of control, makes our work feel less meaningful, and introduces uncomfortable emotions, especially around how easily our individual jobs could be replicated; “AI Doesn’t Reduce Work — It Intensifies It” noted that AI expands the number of tasks expected of individual workers, blurs the boundary between work and non-work, and encourages more multitasking; and “AI Brain Fry” defined a new kind of “mental fatigue that results from excessive use of, interaction with, and/or oversight of AI tools beyond one’s cognitive capacity.”
Crucially, as author Julie Bedard recently clarified on “Hard Fork”, this isn’t “burnout”; it’s a new form of cognitive load from the pressure to achieve perfection via AI. And we don’t quite know what the consequences of these tools will be in the coming years. More crappy memos? More mental health disorders? I struggle to see the horizon through the fog.
I’ve experienced elements of the arguments above: I sometimes give up on learning case concepts and ask AI for the answer; all of the AI-created startup websites look and sound the same now; and I fume when Claude doesn’t execute the way I want.
But what resonates with me most are the emotional arguments around AI. My deepest secret might be that I enjoy some manual work, because repetition helps form judgment, whether I was learning piano, calculus, or flip cup. It’s why the first two years of consulting and banking are spent editing footnotes and slide formatting: to give 22-year-olds exposure to the corporate world. As our workdays get longer, these manual tasks also offer a cognitive break between focused sessions.
There’s also a sense of humility that comes from doing the most manual tasks and being a “team player.” I have toiled over endless slides and gotten to speak at conferences: doing the first, in part, enabled the second. I fear that when we delegate everything to agents, we lose what it means to do tasks that build character precisely because they’re unrewarding. I think about the hours I spent writing my thesis senior year with my friends, and how it cemented our friendships and made me love my topic; today, with Deep Research, it could be done in seconds.
Worst of all, I’m scared that this attitude will worsen the divide between the haves and have-nots. As knowledge workers hand tasks off to agents, they may forget the value of treating people, especially subordinates, with kindness. In a society where inequality is increasing rapidly, I worry that we’ll forget what it feels like to do the real work and discount it relative to the “managerial” capabilities we’ll need to develop to work with AI.
I didn’t write this article to cry wolf. I wanted answers on how to maximize the benefits of AI while feeling less uneasy about the consequences. A few takeaways: to protect learning, a junior software engineer told me that he consciously limits AI usage on tasks he’s never done before. Another best practice: asking AI for inputs, not outputs. For example, I used AI to pressure-test different arguments before writing this piece. And I consciously try to be kind to AI even when I’m frustrated by its inefficiency, because I don’t want to underestimate the work it’s helping me complete. Understanding the value of automation means knowing, and appreciating, the frustration and woman-hours I would otherwise be putting in myself.
Recently, the New York Times put out a controversial quiz asking whether readers preferred AI imitations of classic writers to passages from the real authors. The results: we seem to narrowly prefer the AI, partly because it’s more straightforward and easier to read.
It brought to mind the age-old question: does making something easier make it better? Deciphering Cormac McCarthy’s language in the quiz took me several minutes of second-order thinking and deep attention, compared with the AI rendition that I ended up selecting. It made me realize that with AI, we lose something else important: imperfection.
There is a joy in deciphering something you’ve never seen before, because it’s not catered to mere understanding. So to end on a 300-year-old quote from Alexander Pope: “To err is human.” Maybe we were never destined for the perfection these AI tools can give us.

Mira Nagarajan (MBA ’26) is a second-year MBA student at HBS focused on climate and energy tech. Outside of class, she can be found complaining about how cold Boston is while running along the Charles, struggling through New Yorker pieces, and trying to put down Claude Code.
