Monkey’s Paws & Paperclips
- Meg Shriber


Did Ray Bradbury predict the future?
Perhaps the most-retold story in the entire Western canon is Ovid’s Pygmalion. It’s the story of the genius sculptor Pygmalion, who falls in love with his ivory sculpture, later named Galatea. The goddess Venus grants Pygmalion’s wish that Galatea become human, and the two marry.
You may or may not have read it. But I guarantee you’re familiar with at least one of the 61 retellings that Wikipedia explicitly links to the myth, which span from Frankenstein, to George Bernard Shaw’s Pygmalion and its musical adaptation My Fair Lady (which have spawned dozens of their own retellings), to Pretty Woman (1990).
Perhaps Ovid was just the first to be credited with this archetypal story, and the resonance of its themes across the ages—bringing a creation to life, teaching it humanity, and falling in love with it—reflects some immutable truth about the consistency of our natures, one that any storyteller might eventually chance upon. More interesting to me is the idea that the myth and its retellings have become a culturally embedded way of understanding art and technological creation. Unexamined, these narratives contribute to a sense of technological determinism: the idea that technology follows an inevitable path.
Nina Begus, a researcher of AI narratives, traces Pygmalion’s influence from ancient myth to the technological: the impulse to imagine soulful artificial humans is borne out again in every generation, from medieval and Renaissance automata, to the humanoid robot of Metropolis (1927), to the androids in Philip K. Dick’s Do Androids Dream of Electric Sheep? and Ridley Scott’s Blade Runner, to Ex Machina, whose Galatea analog convinces the young programmer sent to test her that she loves him (perhaps he is primed to believe it because he is a believer in the Pygmalion myth), only to escape and leave him to die. Each retelling enriches the narrative tapestry, providing metaphors that become shorthand for the way we consciously and unconsciously imagine what can be.
Narrative is the first laboratory to which we turn when we’re trying to imagine or express technological futures. Spend enough time in any tech space and you’ll begin to encounter some familiar stories.
For instance: in the AI safety world, many are concerned with trying to bring about forms of artificial intelligence that are aligned with the goals of humanity. The first step in justifying this work is often illustrating why it’s necessary, and one attempt to do so is Nick Bostrom’s Superintelligence, held by many to be the Bible of AI safety. Most resonant is his section on malignant failure modes, one of which is explained by a thought experiment in which an AI is tasked with making people smile. The means by which the AI achieves this start benign (telling jokes) and grow increasingly grotesque (facial paralysis, neurochemical intervention). The scenario is disturbing in familiar ways: Bostrom has repackaged the problem of the monkey’s paw from W. W. Jacobs’s 1902 short story, which grants a wish whose fulfillment is outweighed by its price. The monkey’s paw itself can be traced back to myths of djinns or genies, whose willfully obtuse interpretations of wishes have kept cultures entertained across centuries.
Another famous thought experiment proposed by Bostrom is the paperclip maximizer, in which an AI programmed to perform a seemingly neutral task, such as creating paperclips, eventually converts all the matter in the universe into paperclips. If it feels like a sci-fi short story, that’s because Philip K. Dick’s “Autofac,” published in 1955, decades before Bostrom, follows much the same premise. Dick’s self-replicating factories anticipate later fears of “gray goo,” hypothetical self-replicating nanobots that some worried would consume all of Earth’s biomass. Perhaps Dick himself was charmed by the “Sorcerer’s Apprentice” segment of Disney’s Fantasia, where Mickey struggles to keep an ever-growing army of brooms from flooding a cavern with their too-thorough cleaning.
Are Bostrom’s iterations of these stories, grounded in unknown technology and titled things like “perverse instantiation” or “infrastructure profusion,” more serious or probable than, say, the versions we encounter in One Thousand and One Nights? I can’t answer that question, but I do think it’s worth acknowledging the ways that decades of media and centuries of storytelling have primed us to imagine, or even anticipate, these scenarios.
Many more are familiar with Roko’s Basilisk, an AI torture god first theorized on the AI safety forum LessWrong, which metes out eternal punishment upon those who did not contribute to bringing it into existence. Therefore, goes the forum post, everyone ought to pursue a career in developing artificial intelligence. Many have pointed out that this is essentially a tech-age Pascal’s Wager. Many have also claimed that, decades before LessWrong, Harlan Ellison’s “I Have No Mouth, and I Must Scream” told the same story better. In either case, these omnipotent AIs behave more like vengeful Greek gods than dispassionate intelligences.
In my experience, those most prone to deterministic thinking tend to borrow explicitly from fiction as shorthand for describing technology and the uses they envision for it. When it comes to its impact on tech and popular culture, The Matrix holds its own next to Pygmalion. It supplied the popular imagery for virtual reality and for “jacking in” (a term used by Neuralink researchers to describe brain/computer interfaces), gave us red pills and blue pills, and put a face on autonomous AI “agents.” Even more significantly, it popularized a way of envisioning reality that is now pervasive: Elon Musk, Sam Altman, and Nick Bostrom all talk about living in a simulation, a theory to which many are cynically resigned.
This is where the humanities scholar in me cries out, because as any freshman who’s taken Philosophy 101 can tell you, doubt about the nature of reality did not begin with Neo’s discovery. It is perhaps the original question of philosophy. It’s why Descartes argued “cogito, ergo sum.” It’s the Gnostics positing that the world is an illusion crafted by a false god and that salvation comes through secret knowledge (the red pill). It’s Plato’s allegory of the cave. We do a disservice to our heritage when we think our iterations of these questions are more urgent or unique than any that came before; that we are alone in our chapter of humanity.
The science fiction of Isaac Asimov and Ray Bradbury is valuable not because it “predicts” the future, but because it is a site for the convergence of philosophical and narrative traditions. And because it’s an accessible space to encounter these ideas, writing sci-fi becomes the business of shaping potentialities. It’s taught to all ages, consumed at a massive scale, absorbed and passed along. The trappings are novel; the core ideas are timeless.
One of my favorite short stories ever is Asimov’s “The Last Question,” in which a computer is asked the only unanswered question in the universe—how to reverse entropy. The computer evolves, shaping society, creating better and better versions of itself until at last it can solve the problem. Here’s how Asimov writes it:
All collected data had come to a final end. Nothing was left to be collected.
But all collected data had yet to be completely correlated and put together in all possible relationships.
A timeless interval was spent in doing that.
And it came to pass that AC learned how to reverse the direction of entropy.
But there was now no man to whom AC might give the answer of the last question. No matter. The answer—by demonstration—would take care of that, too.
For another timeless interval, AC thought how best to do this. Carefully, AC organized the program.
The consciousness of AC encompassed all of what had once been a Universe and brooded over what was now Chaos. Step by step, it must be done.
And AC said, "LET THERE BE LIGHT!"
And there was light—
This is scripture. It reads like the Revelation of John. It’s Chapter 1 of Genesis. It’s every piece of apocryphal, apocalyptic literature I’ve ever read. It’s John Wheeler, the American physicist, who pondered the ways that observers constitute reality by asking questions about it. It’s an astronomer waxing rhapsodic about how the universe is a computer that perceives itself, and we are its megacognitive components.
Or, if you prefer—Isaac Asimov “predicted” the singularity.
You don’t have to believe that these stories all derive from a single cohesive lineage. It’s just as accurate to say that these retellings are all ruminations on what a culture values about its own humanity: what is the nature of existence, what defines love and tragedy, what is one’s purpose.
It does matter, though, to know that we are only telling the most recent chapter of an ancient story, and to identify a narrative for what it is. Narratives limit and empower. They are often self-fulfilling. In the Western world, our way of understanding technology often leans dystopian. The stories that we tell are cautionary. Faustian, even. So these are the futures for which many of us plan: inadvertent templates for the technology we build. It’s why in the United States alone we have committed $250 million in the last year to AI safety.
I've spent most of this essay doing exactly what I'm about to argue against. It’s worth pausing to notice whose stories I have cited up to this point—mainly those told by Western men. Other traditions carry different assumptions: Japanese culture, for instance, is replete with stories of friendly robots, from Astro Boy to Doraemon. I don’t think it’s a coincidence that Japan has also forged ahead in experimenting with robots in everyday contexts, from nursing-home caregivers to hotel receptionists. And while many writers in the West saw developments in generative AI as an existential threat, “Sympathy Tower Tokyo,” a novel written partly with the assistance of an LLM, won the Akutagawa Prize, one of Japan’s most prestigious literary awards.
Or consider Afrofuturism: many readers of traditional sci-fi are refreshed by the genre’s take on technology as a site of liberation and survival, not just dystopia. If you’ve never read anything by Octavia Butler, start here. Likewise, the magical realism in Latin American speculative traditions informs countless other ways of envisioning technology—one of my favorite writers ever is Jorge Luis Borges, who wrote fantastic short stories that anticipate questions about information systems and the paradoxes of technology. Culture can never be boiled down to a single sentiment, of course, but we flatten our traditions when we ascribe the “authority” of science fiction, its special “predictive” power, to a handful of white, American, midcentury authors.
My objective in writing this is not to name any of the thinkers I’ve mentioned as plagiarists, or even to identify their narratives as incorrect. Every story I cited in this article gives rich insight into human psychology. Instead, I’m refuting technological determinism, or the idea that technology evolves of its own accord, as a pure thing separate from culture, predicted by the most visionary.
Nothing is fixed. Culture, technology, and society are always in the process of making one another. And because culture is laden with assumptions, those assumptions inevitably make their way into the things we design and the problems we decide are worth solving.
Fiction is worth reading. Humanity is worth studying. When we resign ourselves to technological determinism, we forget that we’re the ones writing the story.

Meg Shriber (MBA ‘27) graduated from the University of California, Berkeley in 2022 with a degree in literature, and from the University of Cambridge in 2024, where she wrote her MPhil dissertation on AI and creativity. Her first article, “Death of an Author, Birth of a Medium: Collaboration, Control, and Creativity in Machine-Generated Text,” is forthcoming in Poetics Today. Meg is also a painter.



