Conversations at the End of a World
On art, artificial intelligence, and the Western myths of progress and doom

The following dialogue between myself, my brother (the calligrapher, stone carver and conceptual artist Nicholas Benson), the painter Leslie Parke, and the photographer and teacher John Paul Caponigro is partly about the intersection of Art and Artificial Intelligence. More generally, it is also about our historical relationship to technological innovation writ large, and to the kinds of political systems and power structures that subsidize the most harmful aspects of its proliferation for the profit of the innovators, but too often also at the expense of humanity and of the planetary environment that supports us.
None of us here are Luddites. John Paul and I wanted to kick off this chat because we each use tech in so many different ways in the making of our pictures, as well as in all the other things that we do professionally: in JP’s teaching and writing, and in the publishing, digital editions printing and book design that I practice alongside my oil painting. I’ve been having a similar back and forth for years with my brother Nick, and with our painter friend, and my fellow Woodstock School alum, Leslie Parke. Nick’s and our family’s ancient trade of inscriptional letter carving has been deeply impacted by digital design innovations over the past several decades. His own art also addresses that meeting of the very old and the very new, particularly in relation to AI. Leslie too has been a longtime and enthusiastic experimenter with and adopter of photographic and digital technology as an adjunct to her direct painting practice. And as she has told us here, she has also used AI as an aid in her current memoir writing project. So, in one way or another, all of us have been putting our own skin in this game of art and tech for almost forty years, since the early days of the digital imaging and design revolutions.
Christopher Benson:
The main issue I’ve been grappling with, as we witness the increasingly intrusive and dominating influence of digital technologies in our lives, and especially of Artificial Intelligence, is a phenomenon I’ve come to think of as Manufactured Inevitability. I see this as the centrally enabling idea behind all technological advance from time immemorial. It’s that voice that says: “look, I’ve made a bow and arrow! It has arrived and now we must use it because it is inevitable by its very nature (and as an emblem of my own innovative cleverness) that it must be used!”
This mindset was probably present through the whole arc of technological advance, from our earliest stone and metal tools, to the wheel, to early industrial innovations like the spinning jenny, the cotton gin, steam power and the internal combustion engine; then on into manned flight, nuclear power, the mass media of radio and television, and now to the internet and AI — to say nothing of all the other scientific advances that have influenced all our innovative developments. Throughout this whole history of technology, we have blithely, even eagerly accepted the idea that it is a force in its own manifestly destined right, whose dominance over our lives we have no choice but to accept.
What I want to know is, do we really? Or is our sense of any given tool’s rightful precedence, over any countervailing preferences we might feel or express, manipulated within us by those who would most like to use it, and who will also most clearly profit from that use? This, to me, is the defining existential question of our age, and one that has never been more crucial to address than it is right now.
I don’t object to technology, nor to the ongoing quest for its development. It is wondrous stuff. I only hope to see more critically innovative thinking developed alongside it. As I suggested in my last post here, titled Cavemen with Computers, even as the pace of our innovations has accelerated year after year, century upon century, our relationship to that technology has scarcely evolved at all. And embedded within this notion of technology’s inevitability there seems to be a presumption that we are not capable of evolving. The foreordained conclusion appears to be that technology itself is more malleable, more adaptable than we are; that it is the “fittest” which will survive, inevitably leaving us in the dust along the way.
Maybe. But I do think it might behoove us to finally call that narrative into question.
Thoughts?
Nick Benson:
The human ego and the economy / capitalization of innovation are completely at the heart of the matter. An interesting perspective on AI is that offered by the female student interviewed in that New Yorker article you sent to me, written by a professor of humanities at Princeton. After having long and complicated conversations with AI, she made a very insightful assessment, and I paraphrase liberally: "I asked it question after question, and what became clear to me in the exchange was not the topic at hand, but the way in which the machine chose to interact with me. I was not judged for asking questions that, in academic circles, would be deemed stupid. I was not treated differently because I was a woman. The machine gave me confidence in myself by not making me feel inferior."
Time and time again, when tech experts report on the dangers of AI becoming sentient, what I hear more than anything else is the human fear of a superior intelligence that could wipe us out. Our genetic programming for fight or flight and survival of the fittest can override altruism, and at its worst that code can be psychopathic.
AI mimics human behavior because that is what it is programmed to do. I suppose that if it mimics the worst in us we may well have something to fear, but again what will ultimately drive programming is greed.
Leslie Parke:
Yes, this is an important and complex topic, and you're not alone in wondering about it. The concern that AI might turn against humanity is not just science fiction; it's based on real questions raised by scientists, philosophers, and AI researchers about control, alignment, and unintended consequences.
I have used AI extensively to research topics for the memoir I'm writing. I also ask it questions about my writing. I take what I like and leave the rest. When it writes or edits, it can go bonkers. But I experiment with it all the time to learn stuff. I think the problem is that we might be the last generation taught how to write. Just as with calculators, if you don't know the principles running it, you can't determine when it is giving you a wrong answer.
It's a new frontier and it will be a while before we know how it helps and how it hurts.
John Paul Caponigro:
Manufactured Inevitability is a social issue, not a physical one.
There’s no must, but there are statistical odds, so what’s it going to take to stack or beat those odds?
Inevitable:
I’m tempted to say that ideation is inevitable. Objective reality is out there. Things work in certain ways. We’ll figure things out eventually. Trying to stop minds thinking is either oppression (if external) or an advanced skill (if internal). Playing devil’s advocate to my statement about inevitability, cultures shape thought, making more space for some thoughts to occur and less space for other thoughts to occur.
Manufactured:
Action is both individual and social. Some cultures will do what others will not. Even within one culture, some individuals will do what many will not. This may force the collective hand. It takes a universal collective to regulate development and implementation. So far, our collective ability to move from individual / tribal / national mindsets/heartsets has been limited despite our shared global existential polycrisis.
Collectivity:
A similar debate (and some regulation) has been going on, only a little longer, over genetic engineering, potentially an existential threat whether through extinction or accelerated evolution.
The history of nuclear energy and weapons of mass destruction provides useful context for reflection but not prophecy. Though the Industrial Revolution started over 250 years ago (1760), the question of inevitability may be as recent as Oppenheimer, who felt that if the U.S. hadn't developed an atomic bomb, someone else, more specifically Nazi Germany, would have. He didn’t want to develop weapons of mass destruction but felt compelled to do so. Nor did he want to use Little Boy or Fat Man on Japan; he preferred a demonstration only and wanted to give Japan more time to capitulate than the U.S. government allowed. Haunted for the rest of his life, he became a strong advocate for international nuclear arms control. After overseeing the project that built the atomic bomb, Oppenheimer lost his influence over the direction of nuclear weapons policy when Edward Teller lobbied for even bigger hydrogen bombs. Oppenheimer recommended tactical nuclear weapons instead, a direction eventually taken after the H-bombs were built. Since the first two bombs were dropped on Japan, we’ve improved WMDs and made more than we can use and survive, yet nuclear weapons have never been used on people again, so far. While eliminated in some countries, globally the development of nuclear energy has been slowed, not eliminated. We managed to build fusion bombs, but we can’t get fusion power fast enough, can we?
Psychology:
These are also psychological issues. Economics is both a social and a psychological issue. To survive this voyage we’re going to have to recalibrate what enough is. Despite our growing population, we have enough food to feed everyone; still, there is both starvation and extraordinary wealth inequality.
AI is potentially a Pandora’s box but that apple in Eden was far too tempting by design.
How’s that for a mixed metaphor? ;)
CB:
When I posed the phrase "Manufactured Inevitability", I was thinking of the practice which Walter Lippmann, and later Noam Chomsky, described as the manufacture of consent, in which those with a vested interest in public acquiescence to some particular sociopolitical or economic reality will manipulate that consent in a population to whom said reality might, on deeper examination, prove objectionable.
The sense of technology’s inevitability as a sometimes beneficial, but also often intrusive or even destructive force in our lives is to a large degree manipulated (and therefore manufactured) by those who profit most from its proliferation. They really need for us to believe that there’s nothing we can do about its most destructive outcomes. But despite JP’s good point about Oppenheimer feeling he had no choice but to develop the bomb, this sense of technology’s inevitability is far older than that. I think it is inextricably cemented into the whole, overarching narrative of linear progress that has defined Western thought — from religion to philosophy, politics, science and even art — for millennia. Our cultural stories are rife with fables that validate this view of the inevitability, not just of technological innovations, but of all the deficits they deliver to our doorsteps along the way: the irretrievable evils of Pandora’s box, the all-powerful genie that cannot be put back in the bottle, or the apple of knowledge through which Eve condemned us all to be expelled forever from some mythically benign earlier Eden. These are stories that both create and then repeatedly reinforce our sense of that destructive inevitability. They also build on an even deeper foundation of belief that humanity is on a linear and always ascending path of progress from a presumably simpler, more primitive or innocent point of origin, towards an apotheosis in some necessarily higher evolutionary step (which AI’s most fervent creators and boosters of course ascribe to their invention). But there’s also the implied fault of our limitations: that old Icarus fable that warns us against reaching too high. Part of this narrative we’ve been sold is about the hubris of our technological aspirations; that our own moral limitations make us deserving of the downfall.
Our self-defeating curiosity and stupidity are the original sins that set us on a path we can neither abandon nor control, towards a hell that we justly deserve.
With regards to JP’s point about “stop[ping] minds thinking”, I would never suggest to anyone that they should shut down inquiry or engagement with any of the difficult questions and inventions we create or encounter. I only hope to introduce a kernel of skepticism into these narratives we take so much for granted, because I see them as such profoundly mythical and also self-serving stories, not to mention potentially disastrously destructive ones. But I have said already and will doubtless repeat it many times before we are done, that I have no objection to technology itself, including AI, or even Nuclear power. All of these are tools with potentials for positive use and further, better development alongside all the real threats they pose.
In an earlier private exchange, Leslie objected to my use of the term “Agency” in relation to the use of AI. I’d said something to the effect that we all need to claim some personal agency in resisting the inexorable march of developments we have a right, or even a duty, to question. She connected that word, though, to the very specifically gendered lack of autonomy and power that has long been experienced by women in our male-dominated, patriarchal culture. I get it. But there is a way in which all that I am describing is also the legacy of exactly that same top-down, hierarchical and repressive patriarchal DNA that resides in the whole historical trajectory of the West, right back to Pandora and Eve, two presumptuous mythical women who dared to make autonomous decisions for themselves. So why not call it agency in their name?
I don’t believe for a minute that "the genie can be put back in the bottle", as Leslie said. But I do believe we have both the right and the ability to guide the genie in those ways that suit us best, or even to walk away from it when that seems like the right thing to do. Determining when and how to do either of those things is what we need most to address now.
LP:
It seems to me that you’re making assumptions about what’s still possible here, about the choices we have. But do we really have choices? Have you tried to find a map lately? In other words, we’ve surrendered control to technology in small, incremental ways, and now there’s no infrastructure left to support doing things differently. Should we have let that happen? Probably not. Could we have stopped it? Also, probably not.
In a way, I think we’re already living in a world where the lights have gone out. And as people who still remember what it was like when they were on, you’re suggesting we choose to keep them on, even though that may no longer be possible. In just a short time, the very concept of truth has been dismantled. This isn’t AI’s fault, but AI will accelerate it. You could blame the powers that be, or just capitalism. Simply put, truth has become whatever the ultra-wealthy and those aligned with them say it is — FOX News, etc. You get the idea.
So in that sense, yes, I agree with you: put on the brakes, make other choices. But will we? If we could be assured that AI, or any technology, was grounded in “the truth,” wouldn’t we welcome the change? But maybe the invention of AI has simply revealed that truth is relative, and probably always has been. Maybe it is holding up a mirror, showing us the hologram of our own projections. That’s where my objection to your use of the word agency comes in. I’ve always lived inside the male-dominated hologram, where men decided what truth was, and it turns out that their version wasn’t accurate. Nor is my white version for that matter.
In this moment, I’ve chosen to live in a world of my own making. I’ve consciously filtered out what I can’t accept. I’m trying to construct a new hologram. And I’m using technology to help me do that. But my hologram isn’t any more “true” than theirs. It’s just more pleasant.
A friend of mine worked on a project that used AI to mine climate data from reliable, vetted sources. This wasn’t a casual AI search; it was carefully controlled. The idea was to make trustworthy information available to any organization working on climate issues. But it wasn’t accepted because AI had been used. That’s just stupid. They chose to believe that AI couldn’t possibly return 100% reliable information, so they threw the baby out with the bathwater.
NB:
I agree with Leslie that agency, the ability to control the course of this runaway train, is not something we really have.
As I watch more and more very wealthy people move into my hometown and change the nature (as I have understood it) of this place to suit an AI-generated image of the new American Dream — in wealth, social connection and ersatz happiness in the form of lavish homes, expensive cars and grand social events — I have also closed my doors and focused on a world of my own making; or should I say, keeping those old incandescent bulbs glowing.
I also could not agree more about Leslie's hologram of experience, and how that reality can be, at times, torn aside and viewed through the limited perspective of our immediate day-to-day lives, and the privileges that can come with them.
"This isn't AI's fault, but AI will accelerate it." This has been the case for over a decade now, and my perspective on this evolution as dystopian is hard for me to change. Strangely, despite my pessimism, I have some odd idea that AI, if it does reach general intelligence or beyond, may try to save us from ourselves. Wishful thinking, I guess. Time will tell.
I’d like to add that group dynamics and individual assessment are two very different things. We all know this, and of course Nazi Germany is the go-to for the archetypal example, but an interesting twist on the similarities of our current political situation in the U.S. and 1930s Germany is the smart phone. It is quite literally an algorithmic propaganda machine. It is the digital personification of Goebbels, and I’d say more effective.
LP:
I want to throw in one little "fact" that came to my attention recently. For my entire life, the explanation of fertilization was pretty much characterized as a survival of the fittest scenario, where the stronger, faster, more powerful sperm zoomed ahead and penetrated the egg. Well, it turns out that that description of fertilization is not true. The sperm have very little to do with "winning the race, being the best, etc." Put in layman's terms, it is the egg deciding which sperm she wishes to attract. But the other story persisted because it was written and told by men.
Isn't there something, somewhere, about a thing being changed by being observed? To which I'd like to add — a thing changed by being observed by whom?
In telling "the truth," we have rarely been able to escape our prejudice. I think what we are longing for is "accepted truth." We want the story that we tell ourselves to be the story that is generally accepted. But again, as a woman, I have to say that hasn't worked out so well for me.
What I find scary now is that so many people are not interested in the "accepted truth." They often know that what they are being shown is horseshit, and they don't care. They just like that it is being shoved in our face (our liberal, educated face).
JPC:
I’m enjoying the many fine points you’ve all been making.
Faux inevitability has been and is being manufactured purely for profit or advantage. It might seem like inevitability is an absolute; it either is or isn’t. Maybe inevitability is more a matter of odds. Is something that is 99% likely inevitable, or only virtually inevitable? Given enough time, the one-in-six chance of rolling a three on a die becomes a certainty, unless you stop rolling the die. In a collective, everyone has to stop. One country opting out of the arms race, like Japan, doesn’t stop the arms race or stop others from wanting to join it. With collective will being our scarcest resource, and something anyone can opt out of at any time, the odds keep rising.
AI isn’t inevitable; it's here. I think we’re all searching for its good uses and growth and hoping to avoid harm. It is inevitable that we’ll make some mistakes along the way, hopefully recoverable ones.
Is AI a choice? Despite Stanford neurobiologist Robert Sapolsky’s Determined: A Science of Life Without Free Will (an incredibly well-grounded argument that free will is an illusion and our actions are determined by a complex interplay of biology, environment, and past experiences), I still have faith that free will exists, so I'm going to use that illusion. I think I can decide to use or not to use AI, therefore I can? It’s about as hard not to use AI as it is to avoid using plastic. I’m talking about smartphones, browsers, apps, the businesses I transact with, like banks and stock markets, and our government.
I’d like to respond to Leslie’s very self-aware comments about the construct of truth being deconstructed.
I’m deeply concerned about the abandonment of attempts at objectivity, rational discourse, and argument (in the best sense of the word, making a case for something) which we’ve been experiencing since before AI but which has been accelerating because, as Leslie so astutely points out, it can create such credible fictions so easily.
I’m waiting for AI to out the deep fakes. What better tool to tell us when we’re consuming altered or generated texts, images, audio, and video than the things that create them? Build urban legend detectors into the systems that deliver us information. Right now, we could say nothing is credible (negative) or we’re all incredulous (positive).
I wish for a media-sphere that will point out errors, lies, red herrings and all other forms of rhetorical manipulation. I’d pay for an intermediary who would do that for me. While I’m waiting, without holding my breath, I’m training myself to spot them better and helping others when I can.
I feel a need to do my part and become much clearer about the differences between facts (data), meaning (information), and theories or truths (both of which are constructed). The term “alt-facts” is one of our most insidious recent inventions — facts are facts. I’ve been trying to memorize Carl Sagan’s Baloney Detection Kit (from The Demon-Haunted World). I carry it with me in a note on my phone. When it wasn’t sticking, I put an AI summary of it in the same spot to help me.
I’d also be very interested to hear how each of you is using AI.
CB:
Maybe we’ll tackle the topic of how each of us is using AI in the next installment, if we decide to carry on from here. For now, I’d like to end today by widening the lens a bit to incorporate a deeper question about all human affairs, within which this narrower topic of technology’s dominance is nested. Leslie asked earlier what “map” we might consult to guide us onto an alternative path from this AI juggernaut. But my premise from the beginning (that we can choose how best to use these tools for ourselves, rather than merely accepting their growing interference and dominion over our lives) flows very much from a lifelong personal and professional habit of drawing my own maps rather than consulting those made by others. That said, I have also been strongly influenced over the past two years by the late David Graeber and David Wengrow’s extraordinary 2021 book The Dawn of Everything, whose massive 704-page text I’ve now listened to, end to end (in audio form), at least four times, while also repeatedly going back in to revisit specific chapters.
Graeber and Wengrow plumb a deep well of recent research in their conjoined disciplines of archaeology and anthropology to suggest a radically new picture of the many sophisticated social and political experiments which actually proliferated all over the globe throughout the Stone, Bronze and Iron Ages. These are offered as a counterpoint to a more simplistic, but long-accepted tale in which small, primitive bands of hunter-gatherers (Jean-Jacques Rousseau’s “Noble Savages”) prevailed worldwide for tens of thousands of years, then suddenly blossomed, roughly ten millennia ago, and as if overnight, into the civilizing revolution of agriculture and the rise of the great cities which it enabled for the first time. It was only with these developments, the old story goes, that we see the arrival of the large, hierarchical, coercively enforced and bureaucratically administered “states” we have known ever since, whether in early authoritarian chiefdoms, in monarchies, empires, or even in our more nominally democratic systems which nevertheless also maintain social control through threats of coercive force from above.
Throughout the book, the authors debunk Rousseau’s Edenic hunter-gatherer myth by citing example after example of elaborately fluid, adaptable and highly sophisticated prehistoric civilizations regularly rising and falling for tens of thousands of years before the supposed touch-down date of agriculture (including evidence of effective seasonal agricultural practices existing in partnership with intermittent phases of hunting and gathering, not to mention alongside the establishment of densely populated cities around which even larger populations spread out into often vast geographic areas).
Graeber and Wengrow suggest that these early social experiments were enabled by their participants’ feeling of the freedom to try them on, as well as to abandon them when the results proved less than satisfying. This idea of freedom and flexibility especially stands out to me in the context of our present conversation. In the authors’ formulation, these were the conjoined freedoms ". . . to move, the freedom to disobey and the freedom to create or transform social relationships."
Looked at from this more fleshed-out view of the emerging historical record, it would seem that the only truly revolutionary development at the moment when agriculture supposedly changed the world was a collective social acquiescence to the termination of that previously long-held freedom and fluidity, in exchange for the apparent stability of the hierarchically ordered mechanisms of the state. This was the point, the authors argue, at which we became “stuck” within the range of superficially variable, yet similarly rigid political frameworks that we in the West especially have inhabited ever since. Periodic social upheavals have enabled the rulers and gatekeepers of these systems to change their masks (the King to the Emperor; the Catholic Pope to the Puritans’ Bicameral Court; the Tsar to the Duma to the Politburo; from all monarchies and empires to parliamentary democracies, and most recently through our long slide into a corporatized and increasingly authoritarian form of oligarchy). But irrespective of whatever political window dressing any of these systems may choose to deck itself out in at any given point — some benignly democratic, others more autocratically despotic — all of them have equally depended upon our complete forfeiture of the freedoms to leave those systems, to disobey their rules, or to build alternative systems of our own choosing.
Full disclosure: Graeber himself was an avowed anarchist, and one of the leaders of the Occupy Wall Street movement, which grew up in the wake of the financial collapse of 2007-08. That political preference might be sufficient for anyone who doesn’t share it to ignore or delegitimize his arguments. But both he and Wengrow tackle their subject not as philosophers, nor as political or economic theorists, but as scientists. And it is upon the most rigorous recent scientific research about the cultural experiments of our earliest human history that they base their countervailing thesis, however much they may fit those findings to the story they most wish to tell: to wit, that the long-held image of a hundred thousand years of essentially ape-like egalitarian primitivism miraculously giving way near its end point to the more entrenched top-down and coercive state bureaucracies we all live in now (systems without which, we have been persuaded, large, complex and sophisticated cultures like ours simply could not function) is a complete myth and untruth. As such, this actual history may crack the door open at last for us to at least imagine, but perhaps also begin to design, some truly innovative ways to leave and disobey the imposed limitations of an old and broken world, and to rebuild something new and better.
And who’s to say? Perhaps AI, if judiciously employed, could even help us do that.