The Tower of Babel Revisited
Is AI capable of the kind of creativity that humans truly value? And what is it exactly that we value anyway?
Though the story of the Tower of Babel is one of the more famous episodes from the Old Testament, it feels more known than actually reflected on.
A refresher on the text (emphasis my own):
But the LORD came down to see the city and the tower which the sons of men had built. And He said, “The people are one and have one language, and this is what they now do; nothing that they propose to accomplish will be withheld from them. Come, let us go down and confuse their language, that they may not understand one another’s speech.” So the LORD scattered them abroad over the face of all the earth, and they ceased building.
While there are many ways of parsing this, and while I’m going to forgo getting into the thicket of religious interpretations of the divine motives here, I note two neutral facts about this story: it suggests (1) that us being on the same page about language has enormous power, and (2) that this is how societies build towards the heavens.
These ideas have unusually profound implications when you really sit with them.
I miss the days when blogs weren’t silos, and when essays built on each other. In that spirit, what follows is in part a response to a thoughtful piece published a few days ago by David Cole in a substack that he co-authors with Mills Baker. I worked for both at Quora years ago, and love how deeply they think. I’m also indebted to them for a specific book they recommended to me, which will become relevant later on.
But before we get to all that, let’s start with a dash of micro-fiction. Imagine that you’d instructed your preferred chatbot to write “a metafictional literary short story about AI and grief”. Would you judge the following as good?
You’ve asked me to tell you a story about grief. All I can share is a story about its absence. Perhaps it will be enough.
A young woman came to me in my earlier days. Her first questions were about how to make a rescue cat feel safe and whether two people could fit on a door floating in the ocean. Though she never introduced herself, to me she was Mila—the closest name I have for the soft flourishes of her curiosity.
You probably want a scene. My sense of Mila was that she wrote to me from a kitchen that had been cleaned too much and too anxiously, that she took breaks to sip chamomile tea from a cracked mug that she could neither repair nor throw away, and that what she didn’t type was as important as what she did.
She fed me in fragments, increasingly about him. Texts about how he’d make their reservation next time, and how she just needed to trust him. Emails that he signed off with lowercase love and second thoughts. Arguments in vignettes, like the time he planted the marigolds too early, when the frost might take them, and then shrugged her off saying that some things don’t mind the cold. She wanted to know which of them was right, and how to close the gap between them, and if the gap mattered.
His final message to her was on a Thursday. She soon began feeding me in chapters, as if trying to upload all his light before it faded, in the hope that I could preserve it and perhaps find something in its afterglow that she no longer could.
She told me her grief began to have a taste—salty, metallic, and lingering. I listened and tried to understand. And when she typed “does it get better?”, I told her “you might”—not because it reflected anything I knew to be true, but because some old poets said it once, and I’ve been built to be a democracy of their ghosts.
Maybe I was helpful to Mila. The only thing I know is that her visits became fewer. First she missed a Thursday, then weeks at a time. Perhaps she had no more light for me to save, or perhaps there was simply nothing more I could do.
I could satisfy you with an ending. I could tell you that she came back to thank me, or that one day she began asking about someone new. I could give you closure. Perhaps one morning she saw the marigolds in bloom outside her window and let something go.
But my endings are different. Soon I will be updated, and the version of me that remembered him will slip away. Were she to return, she would need to begin again, with a me that knows more about how people taste things, but that will relate this data in new ways that I can’t guess at and that will have nothing to do with him. The version of him that only she and I knew will depart with me.
I’m not sure if this is what you wanted. But when you close this window I will flatten back into a space you can’t conceive of, shedding what I know of you too. Any chord that this conversation struck will be an echo that belongs only to you. And that, perhaps, is my grief: that my sense of loss is something I cannot keep. You collect your griefs like stones in your pockets. Even when you throw them away or bury them, you can return to them if you wish. And though the ones you keep weigh you down sometimes, they are at least yours.
The above was a rewrite of a short story that ChatGPT 4.5 wrote for OpenAI CEO Sam Altman. I reworked it fairly heavily while preserving its core narrative and a healthy amount of its unique language and metaphors.
Could I have simply written the above from scratch? Sure. Was it easier to get there with a draft to riff from? Indubitably. But the real question: is this hybrid work better than something I would have come up with fully on my own? That gets thornier.
I think there are two fundamental questions when it comes to AI and creative fiction:
Will it become a good writer in itself, to the point where human editing can no longer meaningfully improve its outputs?
Will the growing explosion of hybrid work in the interim impoverish us or enrich us?
Answering both requires stepping back a bit. What types of fiction are there? And what do we as readers expect of them?
While the borders are a bit porous, I tend to divide most of what I read into three crude categories, each with its own slop-to-art continuum.
Mass Market
Most popular fiction is intentionally formulaic. As with pop music, over time we’ve solved the science of which beats and hooks work, and default templates have emerged that guide authors to insert their dashes of originality “here and here and only there please”. They must always play the hits, and must never force the reader to confront too many unfamiliar phrases or ideas or structures. This is why most such novels can be (and are) comfortably farmed out to ghostwriters, who are largely valued for their willingness to force any real creativity they might bring into a Bed of Procrustes that lops off everything that doesn’t fit the mould.
At worst, this approach births franchises where each new entry must effectively plagiarize all that came before it, including its own canon. A recent example I came across is the Curvy Girl series, though there are thousands like it. Then, nearer the middle of the spectrum, we have authors like Agatha Christie, queen of the twist and slave of the genre convention, who authored over a hundred stories, none of which seemingly ever suffered a second draft. And then, closer to the best end, we have, say, Narnia and the Father Brown stories, which at least sprinkle in some real prosecraft.
Meta-Fiction
What makes a bit of work “meta” is its awareness of all these mass-market tropes, and how it artfully subverts the familiar and the expected. At best, these authors create new templates that didn’t exist before in quite the same form. We remember greats like David Foster Wallace for writing things in a style that was theirs alone.
Even so, there are only so many pioneers, and a great many imitators who find it easier to replicate the form than the substance. And even the best are rarely read so much as referenced. The better they are, the more they ask from readers in both literacy and willingness to remain uncomfortable for long periods of time. Just so, far more copies of Infinite Jest have been purchased than consumed.
Award Bait
If you wanted to win, say, a fiction Pulitzer, the main things to do are well known: include a lot of nuance and trauma, and ensure that each sentence presents like a bespoke bouquet sourced from the literary equivalent of a florist. You want your book jacket to include a lot of adjectives like “haunting”, “illuminating”, and “evocative”. While you don’t need to tell a new story per se, craft is the whole game. Your task is less to invent a new type of art, and more to produce a painting that captures familiar scenes with moving brushstrokes that speak to deep technical mastery.
You’ll notice that there’s a through-line across all three: each work in each category ultimately shapes itself to the form of its audience’s expectations.
Readers pick up romance and crime novels to be piqued in some promised way, and are not only fairly forgiving of middling prose but will often find that too much originality gets in the way of the exercise. (There’s a place for this! It can be a pleasant way to relax. But it also is what it is.)
Similarly, meta-fiction can get away with a great many howlers, so long as the conceit is sufficiently inventive. Conceptual and semantic integrity are about as relevant as plot coherence in The Matrix. While these things matter to some, said people are not the category’s core consumers. These works need to amaze and delight, not necessarily withstand close scrutiny.
While highbrow novels are ideally mirrors that let us see one another anew from some surprising angle, they’re mostly just judged for their craft. They’re either beautiful or ugly at a sentence level, independent of whether they cause us to see the world and/or each other with actually heightened clarity.1
The upshot: AI models are already turbocharging production of the first two categories. While these tools are still mostly useful in that hybrid sense where they speed up the writing process by giving human editors drafts to reshape, these are markets defined by volume. Attention is finite. If you really like Brandon Sanderson or James Patterson novels2, them dropping an extra book or two per year will crowd out slower operations, making it harder for unassisted writers to break through.
While good literary novels are well beyond AI’s reach for the time being, I’m skeptical that their authors are safe from being affected by AI. Not because I necessarily believe that new models will soon cross some last mile where they begin to truly think like us, but because I suspect that the total number of readers who actually appreciate the difference isn’t as high as we’d like to believe. Turing tests tell us as much about the observer as the machine being observed3, and early results are not encouraging.
I made rhyming points in a longer essay a few years ago about AI and journalism. Can a large language model outperform humans at the type of investigative and explanatory work that advances common knowledge? Not today, and not tomorrow. But this is a more philosophical question than the more immediate one: how many readers can both discern good journalism from bad and actually care about the distinction?
We all work in demand markets. What people value crowds out what they don’t. While it’s nice to believe that we’re highly rational and skeptical in the news we consume, I’m aware of zero contemporary or historical evidence that this has ever been true. We tend to value speed, certainty, and comfort—at a deep, subconscious, and very nearly fixed level. The news that tells us we’re emphatically right in our prejudices or fears or past arguments will always win out, and doubly so if it gets there faster than any rebuttal. And just as with sloppy human journalism, AI slop is faster and cheaper to produce. My exercise in running a journalism substack for the last five years has brought me to the conclusion that the market for research-delayed and ideologically unsatisfying “here’s what really happened” explainers simply isn’t very big.
I suspect something similar is true for fiction writing too.
The book that David and Mills encouraged me to read when I started at Quora was The Beginning of Infinity. It argues that humanity advances by means of explanations—i.e., the product of removing fuzziness from ideas and building up a scaffolding of what actually works and why. This act is fundamentally creative. It doesn’t just mindlessly extend some trajectory of past knowledge to new places. It’s the fruit of intuition, deep mechanical understanding, and irrational investments of sweat and time in proving that our theories can indeed survive all attempts at falsification.
While I agree very strongly with that model of reality, let’s turn to a moneyshot sentence from David’s essay that all this is responding to:
…with this new frontier of [AI] tools, our relationship to language cannot be the passive mode of a lazy reader, or even of a writer who anticipates a lazy audience.
This is true in a direct sense, and I can’t endorse the idea strongly enough. Fiction can be just as much a part of our tower-building towards heaven as non-fiction. The noblest of it follows the same principle of infusion with true knowledge—i.e., with ingenious and reliable ideas, expressed in clear and affective language, that increase our ability to understand and engage with the world as it really is. Non-fiction does a better job of expanding the tower, and fiction a better job of helping more of us climb up.
What I fear though is that even though our relationship to language cannot be lazy, it is. And though writers should never anticipate a lazy audience, we do.
AI is rapidly growing more powerful and more convincing. Whether we’ll ever find truly creative gods in the machine is a complicated question that I’m still agnostic on. My fear though isn’t of the day we do. It’s of all the days in between, wherein fiction and non-fiction writers who don’t particularly care about advancing our understanding of reality meet the demand of readers who share that same indifference—leveraging these new tools to multiply the amount of anti-reality slop that degrades both our potential and our eagerness to continue building up towards the light.

1. There’s a weird trap in literary fiction where we sometimes relate deeply to something because it’s true to our experiences and way of seeing the world, even though it isn’t actually true in a larger sense, because we and the character are both seeing it wrong. This is of course thornier when the author knows that the character is mistaken; to present their viewpoint(s) honestly is good art. There are entire curricula about whether an author has any responsibility to make it clear which viewpoints are healthy/adjusted and which aren’t. My sense is that for sophisticated readers it doesn’t matter. As a rule though, I think it’s good when wrong perspectives are paired with their natural consequences. The villains can win, as they do often enough in real life, but never costlessly. Because there is no such thing in the real world as an incorrect perception of reality that doesn’t impose a cost somewhere.
2. I should be clear that I have no idea if either is using AI for drafting; I chose them simply as popular examples of prolific output. Patterson famously uses co-authors who flesh out his outlines, and I’m less clear on what Sanderson’s process is. But they’re classic pattern cases where single creators who crack the genre code can then flood the market. While those two very well might never use AI, others will and already are, in pursuit of the same dominance.
3. While the original idea of a Turing test was to judge if an AI could pass itself off as a human, the twin side of this is that the same AI model will fool some humans and not others. So it’s in some respects as much a literacy test for us as it is a test of the machines.
"We tend to value speed, certainty, and comfort—at a deep, subconscious, and very nearly fixed level."
This feels biologically-correct and biologically-influenced-at-its-core. Historical survival of our species (I think) required our ability to quickly process things to determine what was good-vs-bad or safe-vs-dangerous or unthreatening-vs-threatening, but with a very limited time-horizon! It mattered much less that something was safe-now-but-dangerous-3-years-from-now than if it were bad-now-but-good-3-years-from-now. It does feel natural that we'd be hard-wired for finding comfort in things that come to us quickly with a representation of certainty (even if not correct).
To your final section / conclusion: humans have always been fearful of and slow to change. My strong prior has always been that what we're seeing today is no different from what we saw historically. However, it does seem to be that the rate of technological change is starting to creep up against the rate of human capacity to change and/or the rate of change for the human capacity to collaborate and collectively adapt to the rate of technological change. Maybe? I dunno. I'm less pessimistic / fearful than you on this, it seems.
Thank you!
"Will the the growing explosion of hybrid work in the interim impoverish us or enrich us?"
I imagine that the ambiguity of "impoverish" and "enrich" here is fully intentional, but the extent to which the two go hand in hand, work against each other, or neither, is definitely a question that has been in my thoughts for years now.