Category errors about ChatGPT and other information platforms
The imagined community of the digital world
I want to contextualise the following discussion with an evocation of Amara’s Law, just to ground us a little bit: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”
Carry on.
Part One
If you’ve been online at all over the last few weeks, you would’ve noticed that everyone—everyone—is talking about ChatGPT and other artificial intelligence apps, including Sydney, which is allegedly the name of Microsoft’s Bing equivalent. Most of the discussions are a bit breathless, though in fairness, you can see why these platforms are causing a stir. Whatever their shortcomings—and they are legion—such tools will undoubtedly change the world. I don’t think that’s an exaggeration.
As ever, though, it is the nature of the change that matters, and that’s what I am trying to get my head around.
I’m continually struck by the extent to which we are stuck in what I’ll call a pre-digital mindset: we still think of the world as if the communication technologies of the digital age hadn’t fundamentally changed how we work, interact, think and behave. As if we had just exchanged paper for screens.
It is a mindset that can be hard to break free of.
The very nature of legacy analogue tools—from newspapers, to books, to libraries, to filing cabinets—and how they organise knowledge, makes it hard for us to think digitally, to think about the consequences of digitisation itself and the changes it brings to how knowledge is organised, shared and understood. That is to say, the technology changes us as much as we change the technology, and we still kind of live in a Gutenberg world, at least inside our heads. We tend to think inside the box. Or the book. Or the newspaper. Even the symphony or the song or the ballad or the framed work of art.
Guy Rundle makes a similar point in a recent Arena article (and he’s one of the few in Australian media to even recognise what is going on).
Noting that ChatGPT “smashes” the barrier implied by the Turing Test—“which argues that a computer can be said to display ‘intelligent’ behaviour when a human, in extended text-based interchange with it, cannot tell if they are engaging with another human or a machine”—Rundle says that most of our responses are being restricted by the way previous technologies have taught us to think:
Much of the discussion of ChatGPT’s incidental effects strikes me as primarily a way of avoiding the deep challenge it presents. …
There is a circularity in this process that loops through decades of the history of modernity. That there is a fundamental institutional inability to take a critical, reflective grasp of a new technology such as ChatGPT—and AI visual composers such as Dall-E and Midjourney—is a product of the victory of a manner of thinking that has made its runaway development possible in the first place. This is the ‘analytic’ mode of thinking, born in Vienna and largely Anglo-American thereafter, which responded to the European reflective tradition with a denial of its holistic and interpretive orientations.
The words I think of when trying to understand the way in which our thinking has structured itself in the periods Rundle highlights are “linear” and “atomistic”. The written word, and the devices we use to produce and store it, encourage this sort of thinking, and that is the very thing digitisation undermines.
To put it as simply as possible, making a newspaper available online, transmittable by digital means—website, social media app, messaging app, email—isn’t just an alternative to paper. It fundamentally changes our relationship with the production and dissemination of that knowledge.
A digital newspaper—or a website, or network of websites—organises knowledge in particular ways that affect how we access that knowledge, how we learn it and use it, how we share it and think about it. We go from a much more linear way of processing knowledge to something more network-like, in a way that changes hierarchies and relationships.
Again, the form of the newspaper illustrates the point. The paper newspaper was a portmanteau vehicle for the delivery of knowledge that brought together information connected by little more than a temporal relationship—these disparate things just happened to occur at roughly the same time—and that juxtaposition meant we saw “the world” in a particular way.
The digital newspaper smashes that temporality, that portmanteau form, and encourages the development of niches which in turn encourages a “deep dive” into the “rabbit hole” of various topics. It is an intensification and decentralisation at the same time.
The medium is the message if you like, and even in writing an essay like this, I realise the extent to which I am guided by technology that allows me to pull together information and construct an argument in a way that simply isn’t possible with paper.
Something is lost and something is gained in all this, the point being that I don’t think those of us of a certain age—and who still tend to run the world—have caught up, within ourselves, with the ramifications of these changes, and this alone makes it hard to see what ChatGPT and similar tools might actually mean.
A similar thing happened with mathematical calculation once it became computerised and could operate at a speed and scale beyond what humans were capable of unaided. As Conrad Wolfram has said, “computers liberated maths from hand computation, enabling it to be widely and deeply deployed—far beyond what was possible without,” and that this “change is deep and fundamental, arguably the most dramatic mechanisation of any field,” and that it transformed the world “because the bottleneck of human calculating disappeared.”
Another human bottleneck is about to be obliterated.
AI such as ChatGPT, which can process language at a speed and scale on a par with what calculators did with numbers and mathematics, is in the process of further transforming our relationship with language-based knowledge and information, and we will, as ever, stumble along, playing catch-up.
But mathematical calculation is not the same thing as language-based composition.
The main difference between a calculator doing maths and ChatGPT writing an essay is that a calculator performs mathematical computations based on pre-programmed rules and algorithms, while ChatGPT generates human-like text by analysing patterns and relationships in large amounts of language data.
When you use a calculator to perform a mathematical operation, the calculator follows a set of predetermined rules and algorithms to calculate the result. It doesn't have any understanding of the mathematical concepts involved or the context of the calculation.
On the other hand, when ChatGPT writes an essay, it uses machine learning algorithms to analyze and understand the structure and content of the input text, and then generates a response based on that understanding. ChatGPT can analyze and generate text that incorporates complex language structures, context, and even cultural references.
In short, a calculator is a tool for performing mathematical operations, while ChatGPT is a tool for generating human-like language. Both tools have their own specific purposes and are designed to perform specific tasks in different domains.
The reason I put those paragraphs in a different font is to show that I did not write them: they were written by ChatGPT itself, and it is a good illustration of what we are dealing with. I was trying to develop exactly this point after reading the Conrad Wolfram article and I thought I’d ask the AI and see what it said.
I suspect this sort of use will be typical of how such tools are deployed: an aide-mémoire, a quicker way to state an obvious point, a shortcut to some uncontroversial knowledge that can be deployed for clarification, emphasis or whatever.
But even in that relatively simple use-case, we still come up against the problem of the underlying data sets and the nature of the knowledge that the AI is drawing upon. Even in the realm of art, as Benjamin Bratton pointed out the other day, the various AIs are drawing upon a potentially degraded set of image data, and he notes that “it … scrapes the cheapest and easiest content it can find out in the open where no one cares, and thereby this crap is being immortalized. It is the aesthetics garbage content.”
The same is true of the written word to which ChatGPT et al have access.
The things that concern so many people about current systems of online knowledge—accuracy, the spread of false or fake information, political polarisation, and the proliferation of conspiracy theories—are real concerns, and rightly so.
But they may just be early-order effects of our brains adjusting to organising knowledge in a post-analogue world, and that is what I want to look at in Part Two.
Part Two - The new imagined community
It helps to remember that a similar thing happened—and similar concerns were raised—with the invention of the printing press: the initial reaction to that democratisation of knowledge was handwringing about frivolity and inaccuracy. Jeff Jarvis, invoking a debate highlighted by Matthew Kirschenbaum, has written:
This reminds me of the sometimes rancorous debate between Elizabeth Eisenstein, credited as the founder of the discipline of book history, and her chief critic, Adrian Johns. Eisenstein valued fixity as a key attribute of print, its authority and thus its culture. “Typographical fixity,” she said, “is a basic prerequisite for the rapid advancement of learning.” Johns dismissed her idea of print culture, arguing that early books were not fixed and authoritative but often sloppy and wrong (which Eisenstein also said). They were both right. Early books were filled with errors and, as Eisenstein pointed out, spread disinformation. “But new forms of scurrilous gossip, erotic fantasy, idle pleasure-seeking, and freethinking were also linked” to printing, she wrote. “Like piety, pornography assumed new forms.” It took time for print to earn its standards of uniformity, accuracy, and quality and for new institutions — editing and publishing — to imbue the form with authority.
I think this is getting us closer to where we need to be when thinking about how to use ChatGPT (et al), and how to regulate such tools in general.
But we are still