Civilization and its Canned Contents
On Luddites, Artificial Intelligence, and the age-old battle of Man versus Machine
In a recent post about artificial intelligence, I wrote that my regret over the demise of handwritten notes, coupled with my distaste for electronic medical records, makes me sound like a Luddite. And I admitted that I kind of am a Luddite. But AI is moving fast, and I have come to reconsider.
First, let’s define terms.
The Luddites originated in 19th-century Nottingham, England. A secret oath-based society of textile workers, workshop owners, and other representatives of the cottage industry, they went around smashing, burning, and otherwise destroying the machines and technology that they blamed for putting them out of business.
The Luddites were serious. They met at night to practice military-like drills and maneuvers and sent death threats to factory owners and politicians.
But the British army (fresh off the battlefields of the Napoleonic Wars), the government, and factory owners were not playing around either. They shot the Luddites, beat them, arrested them, and sentenced them to execution or worse – penal transportation to godforsaken kangaroo-infested wastelands like Australia.
The movement culminated in a region-wide rebellion that lasted from 1811 to 1816 before it was finally suppressed. In a cool and telling historical twist, the legendary (and fictional) leader of the movement, King Ludd, was said to be based out of Sherwood Forest, just like Robin Hood. Only instead of robbing the rich to give to the poor, King Ludd raged against the machine, striking out against the forces of industrialization that were destroying the old ways of life.
Today we use the term Luddite flexibly. It can refer to anyone suspicious of technology – from high school hipsters with flip phones and Moleskine notebooks, to boomer doctors nostalgic for handwritten notes, all the way to unhinged terrorists like Ted Kaczynski.
As you may suspect, I belong to the middle group, at best a fair-weather Luddite: partial to steno pads but tech-forward when it comes to practice management; unhappy with Zoom but happy to use telemedicine when appropriate; not a fan of AI-generated poetry, books, and essays, but yes a fan of Grammarly’s advice on comma placement.
Then I listened to a podcast in which Ezra Klein, a New York Times journalist, interviewed Gary Marcus, a professor emeritus of psychology and neural science at NYU (to those loyal readers wondering why I always seem to be talking about NYU – it’s a coincidence, I swear!).
Professor Marcus’ thesis is that the advent of AI represents a “Jurassic Park moment” for humanity – an inflection point where the curve of change, which has been bending ever upward since the industrial revolution, veers steeply into completely uncharted, potentially perilous territory.
In his words, “Because such systems contain literally no mechanism for checking the truth of what they say, they can easily be automated to generate misinformation at an unprecedented scale.”
Now Marcus is not a Luddite. He studies AI and is a co-founder of several AI ventures. He is talking about the production of misinformation – with which we are all familiar from Covid and elections specifically, and from using the internet and social media in general.
As bad as it is now, Marcus says, AI has the potential to make it way worse. With AI, as he puts it, the cost of producing bullshit (defined as information that has no relation to the truth) goes to zero.
Russian troll factories, deep fake videos, even false news articles – they all carry a cost. You need to hire the trolls, set them up with workstations, and train them in what to write and how to make it sound believable. The same goes for the cost of producing deep fake videos. Content costs money, right?
Maybe not for long.
At one point in the podcast, Ezra Klein relates that he asked ChatGPT to write something in the style of Ezra Klein from the New York Times. He was floored, he said, by the result, which sounded exactly like him, verbal tics and all (if you listen to him, you know what that means).
The day may not be far off when I will have no way of knowing if I am listening to him or to some deep fake counterfeit that sounds exactly like him but is in fact created purely to manipulate me, sell me a product, or feed me propaganda.
In fact, this is already happening – just ask the victims (a particular female cartoonist comes to mind) of imposters who forge and maliciously distort their work online.
How do you know that this blog post was written – with effort and intent – by me? Maybe it was written – in seconds and for free – by an AI algorithm like ChatGPT instructed to sound just like me (don’t worry, it wasn’t – but then again, that’s just what the algorithm would say, isn’t it?).
Marcus argues that as AI drives the cost of producing bullshit to zero, the volume of misinformation produced for ulterior motives will explode. It will become increasingly difficult, maybe even impossible, to distinguish truth from lies, news from propaganda, and marketing from art.
Now, you may argue that we already live in such a time, but the fear is that AI will turbocharge the process by flattening and democratizing it, thereby obliterating the distinction between real and fake, at least online.
That is what Marcus means by a Jurassic Park moment.
Sounds like a pretty good argument against technology. So why do I say that Marcus makes me question my Luddite-ness? Because I see a silver lining in his dystopian vision.
After all, if the cost of producing bullshit goes to zero, it stands to reason that the value of truth will correspondingly rise. The question will become, where can truth be found?
As AI makes digital information unreliable, we may have to revert to HI as a source of truth and authenticity. What is HI? Human intelligence – information derived from real people in the real world in real time.
It may help to think in terms of canned content versus fresh content.
Canned content is information that can be removed from the vehicle that transmits it. It can be abstracted, duplicated, manipulated, and transferred from one shell to another. The epitome of canned content is digital data – information in the language of machines.
Fresh content is information that cannot be removed from the vehicle that transmits it. It cannot be abstracted, duplicated, manipulated, or transferred from one shell to another. The epitome of fresh content is the human voice – analog data in the language of human beings.
Examples of fresh content include performances, classes, group gatherings, and demonstrations. Think of poetry slams, theater, flash mobs, rallies, performance art, and book readings – these things cannot be mass-produced, reproduced, or widely disseminated. They have to be experienced in person.
Fresh content is teaching that cannot be separated from the teacher; art that cannot be separated from the artist; writing that cannot be separated from the writer; happenings that cannot be separated from the place where they happened.
Mind you, the relative worth of fresh content is nothing new.
We all know by now that interacting with people is better than interacting with screens. It’s one reason why my kids love summer camp, where no one is allowed to have a phone.
And it’s why they have a peculiar aversion – surprising given their youth – to innovations like NFTs and the Metaverse. Addicted to their phones, they have adopted the addict’s revulsion at the prospect of the drug taking over even more of their lives.
When I think back on my most formative educational experiences, they all took place in a classroom, around a seminar table, or one-on-one during office hours and tutorials – with the assimilation of knowledge inextricably bound up with the influence of the teacher.
But just because we may all agree that fresh content is better, doesn’t mean that one can make a living by producing it. Just ask the original Luddites. They could extol the superiority of their artisanal sweaters until the cows came home; that didn’t stop people from buying machine-made at a fraction of the price.
The problem is that, just as the Industrial Age flooded the market with factory-made goods, the Information Age has flooded the market with canned content. Could the ironic solution be that AI now comes to sabotage the process by turning fresh content into the only reliable source of truth?
After all, it’s one thing to say that you prefer to see Ezra Klein at the 92nd Street Y rather than online because it’s more fun; it’s quite another to say that you need to see him there because how else can you be sure what he really “says”?
Maybe, as AI turns canned content into free bullshit – thereby driving human creators out of business – it will boost a parallel market for fresh content – thereby providing creative humans with a new source of livelihood.
It wouldn’t be the first time. Consider the music industry.
As digital technology made it cheap and easy to copy and disseminate music, musicians saw their source of income change. Instead of making their living from selling records, they had to leave the studio and go back on stage. Today, performers generate more than three-quarters of their income from live performances, as opposed to less than one-third in the 1990s.
If Marcus is right, then the experience of musicians is about to be generalized to all creative fields.
And if we really are on the cusp of that Jurassic Park moment – if technology has brought us full circle to where we have no choice but to trust humans more than machines, HI more than AI, fresh content more than canned content – then I guess I’m not such a Luddite after all.