Amara’s Law holds that we tend to overestimate the effect of a technology in the short run and underestimate it in the long run, and I think it is one of the most clear-sighted understandings of technology’s effect on society that I know of.
We can add to it Tim’s Law, which states that any new technology will bring forth from the mainstream media a slew of Chicken Little stories that tend to confuse rather than enlighten us about the pros and cons of whatever technology is under consideration. These stories will have a disproportionate effect on the political class.
Both laws currently hold for discussions around ChatGPT, GPT-4, Bard and other generative artificial intelligence platforms, and for the large language models (LLMs) being deployed in an increasing number of areas.
The media focus on (mainly) worst-case and (sometimes) best-case scenarios obscures the boring and difficult reality that technology always emerges in particular social and political environments, where political power operates in a particular way. It is that underlying social organisation that determines how a given technology will operate and the effect it will have on work, education, or society in general.
As Richard Eckersley wrote recently:
Our situation represents what I have called ‘the demise of the official future’, a loss of faith in the future leaders promote and claim they can deliver. This ‘futures gap’ stems from political and journalistic cultures that are too heavily invested in the status quo, unable to see beyond their limited and constrained boundaries and horizons. Mainstream political and media players face a growing need to manage differently the ‘cognitive dissonance’ between how they think about the world and their work, and the emerging realities of life today and its existential challenges – instead of largely ignoring the latter, as they have done.
The news that the Albanese Government is considering serious regulation of various uses of AI because they think it might lead to human extinction, while continuing to dick around at the edges of various climate-change policies, is a perfect example of this dissonance. As science writer Ketan Joshi noted on Twitter:
I’m not saying that such AI regulation mightn’t be necessary—it likely will be—but can we focus please?
Anyway, I have been trying to get my head around how different AI platforms might be useful to the work I do here, so I ran a little experiment comparing three different tools, and I thought the results were instructive.
I have found that these AI apps can be quite useful in summarising topics or documents, and in doing so they seem less vulnerable to bad-information problems than I might have suspected. If they continue to improve, this sort of use case is likely to be very helpful in a range of areas, including journalism.
The three apps I used were Bard, GPT-4, and Perplexity.
The differences between the three models were really interesting, I thought, and I set them out below.
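For readers who want to try something similar programmatically rather than through the apps’ chat interfaces, here is a minimal sketch of how a summarisation request might be sent to one of these models using the OpenAI Python SDK. The model name, prompt wording, and overall structure are my own illustrative assumptions, not a record of how the comparison above was actually run.

```python
# Minimal sketch: asking a chat model to summarise a document via the OpenAI Python SDK.
# Assumptions: the OPENAI_API_KEY environment variable is set, and "gpt-4" is an
# available model on the account. The prompt wording is illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarise(text: str, model: str = "gpt-4") -> str:
    """Return a short summary of `text` using the given chat model."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "You summarise documents accurately and concisely."},
            {"role": "user", "content": f"Summarise the following in three sentences:\n\n{text}"},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(summarise("Paste the article or document text here."))
```

The same prompt could, in principle, be put to each service and the summaries compared side by side, which is the spirit of the informal experiment described above.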