Is being able to watch a violent attack by a knife-wielding man on an Orthodox Bishop a necessary act of free speech? Does it follow that the decision about whether we should be able to view such material should be made by a lone Commissioner acting on our behalf?1
What occurs to me in all this debate2 is that it is another example of the way in which digitised information exposes the fact that the presumptions of the analogue world no longer stand as self-evident.
I don’t agree with the conclusions in this piece by Alice Dawkins, executive director of Reset.Tech Australia, but this section is well put. She is speaking about the act under which the eSafety Commissioner currently operates and the proposed Online Safety Bill:
The act and the bill share elements of an increasingly outdated approach to digital platform regulation, where well-meaning policymakers have carried across principles from traditional broadcasting to digital media distribution that cannot scale, burden the wrong players, and may inadvertently stoke institutional mistrust.
When the medium is the message, we not only have to rethink the old rules, we have to try and understand the way in which medium and message are interacting.
The algorithms that drive the exposure of online content—including potentially dangerous or offensive content like the livestream of the attack on the Bishop—sort data into ranked lists and serve up what is most in demand. In so doing, via our choices, they recategorise us into ever-finer grades and shades of preference. They make visible (through invisible, black-box calculations) preferences that we were never previously able to differentiate in any aggregate manner.3
The ranking algorithms of the various platforms allow us to self-categorise away from the collective presumptions that govern traditional measures like the Act and the Bill, and to arrive at a much more individualistic understanding of what is acceptable and what isn't.
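The two-way process being described, rank what is most in demand, then refine a profile of each of us from what we choose, can be sketched in a few lines. To be clear, the item names, tags and weighting below are invented for illustration; no platform's actual algorithm is being claimed here.

```python
from collections import Counter

def rank_items(items, engagement):
    """Serve up what is most in demand: sort by observed engagement."""
    return sorted(items, key=lambda item: engagement.get(item, 0), reverse=True)

def update_profile(profile, chosen_item, tags):
    """Each choice refines the user's profile into finer shades of preference."""
    for tag in tags.get(chosen_item, []):
        profile[tag] += 1
    return profile

# Illustrative data: items tagged by topic, with hypothetical engagement counts.
tags = {"clip_a": ["news", "violence"], "clip_b": ["news"], "clip_c": ["sport"]}
engagement = {"clip_a": 900, "clip_b": 300, "clip_c": 500}

feed = rank_items(list(tags), engagement)           # ['clip_a', 'clip_c', 'clip_b']
profile = update_profile(Counter(), feed[0], tags)  # Counter({'news': 1, 'violence': 1})
```

The point of the sketch is the feedback loop: the ranking shapes the choices, and the choices feed the profile that shapes the next ranking.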
In the wake of the confrontation with Musk over whether or not his X platform should take down the offending material, there has been a lot of talk about “social license”—or social license to operate (SLO) as it is sometimes called—the idea of an “intangible and unwritten agreement that reflects the ongoing acceptance or approval from a local community and other stakeholders for a company's operations or projects.” The implication by people like David Crowe, writing at Nine Entertainment, is that Musk lacks this license. But I mean, if you want to talk about social license, what is more licensed than millions of people choosing to avail themselves of platforms that surface these preferences?4
The medium is the morality.
Too much of this discussion has an air of moral panic about it—whether it is claims of “surveillance capitalism”, or debates about the desirability of making violent videos available online. It overlooks the fact that adults can and should be able to make their own decisions about these things.5 We have agency, and regulation should recognise that aspect of our relationship with these platforms as much as it recognises our vulnerability. It shouldn’t just presume we need to be protected from content whose level of threat is deeply contested.6
As Kieran Healy and Marion Fourcade note in their new book, “Social classifications are entrenched in people’s emotions, in their bodies, and in their everyday practices. This makes them hard to change. But change happens anyway.”7
The abundance of rich, multidimensional, digital data and the means to analyze it has profoundly affected how social categories are made and how people sort themselves or are sorted. Relative to their analog predecessors, classifications produced by computer code sifting through digital data are more likely to be anchored in direct measures of behavior. They also tend to be more fine-grained, inductive, and flexible. And they are often more opaque, in the sense that they may depart from established categories and fail to be readily interpretable in terms of them.
…Certain kinds of classifications, typically those applying to human or social collectives, are “interactive” in that “when known by people or those around them, and put to work in institutions, [they] change the ways in which individuals experience themselves—and may even lead people to evolve their feelings and behavior in part because they are so classified.”
Fourcade, Marion; Healy, Kieran. The Ordinal Society (pp. 91–92). Harvard University Press. Kindle Edition. (My emphasis.)
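A toy illustration of what Fourcade and Healy mean by classifications that are "inductive" and "opaque": categories built directly from behavioural measures, whose labels map onto no established social vocabulary. The users, click counts and similarity threshold below are all invented assumptions for the sketch, not anyone's real method.

```python
def inductive_categories(behaviour, threshold=2.0):
    """Group users by similarity of raw behaviour counts.
    The resulting labels are opaque indices ('cluster 0', 'cluster 1'),
    not established categories: they emerge from the data alone."""
    clusters = []  # each cluster: list of (user, vector) pairs
    for user, vec in behaviour.items():
        for cluster in clusters:
            centre = cluster[0][1]  # first member stands in for the cluster
            dist = sum(abs(a - b) for a, b in zip(vec, centre))
            if dist <= threshold:
                cluster.append((user, vec))
                break
        else:  # no existing cluster is close enough: start a new one
            clusters.append([(user, vec)])
    return {f"cluster {i}": [u for u, _ in c] for i, c in enumerate(clusters)}

# Hypothetical per-user counts of (news, sport, politics) clicks.
behaviour = {
    "u1": (9, 0, 8),
    "u2": (8, 1, 8),
    "u3": (0, 9, 1),
}
groups = inductive_categories(behaviour)
# {'cluster 0': ['u1', 'u2'], 'cluster 1': ['u3']}
```

Nothing in the output says "news reader" or "sports fan"; the groupings are real and fine-grained, but not readily interpretable in the old terms, which is precisely the point of the passage above.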
Does this mean that Musk gets his way and that the government should vacate the ground and let the algorithms do their work? Not at all. The government has a legitimate role in such circumstances, much more so than the stray billionaires of legacy or social media. But let the government at least recognise that the answer is a lot less clear cut than their one-size-fits-all legislation allows, and that their own social license will increasingly depend on recognising that we are not in analogue Kansas anymore.
In their submission responding to the draft bill Labor put forward last year, Digital Rights Watch suggested that the government should:
Establish a multi-stakeholder review board for activity covered by the Bill. There is an international consensus that content moderation and take-downs require robust oversight and accountability to prevent abuse of power. The review board should be included in the Bill as a mechanism to review decisions made to remove and block content by the Commissioner. The Board should be made up of the groups most impacted by the proposed laws, including sex workers and activists, and meet regularly, at least annually, to closely examine how decisions are being made by the Commissioner’s office across a spectrum of complaints and investigations.
I would take it a step further and add to such a review board a rotating panel of non-experts—ordinary citizens—who would also have oversight of the decision-making process. In an era of algorithmic individualisation, the only way of providing ballast against the total social atomisation that the algorithms encourage is to design our institutions in as social a way as possible, to build processes that allow us to think with as much of the social brain as possible.8
Maybe the footage of the Bishop being stabbed that is at the heart of this current discussion is a straightforward example of something the government should concern itself with and regulate. It is a lot less clear when it comes to the horrific images of the violence happening in Gaza. Much of that content has not made its way into the mainstream media, and we must be deeply suspicious as to why. It isn’t just the media self-editing to avoid social harm; there is obviously a political rationale for their choices.
Platforms that surface such material shouldn’t be regulated out of existence.
We can’t unsee the divisions these algorithms have exposed (and helped create). So, to the extent that we, and our governments, want to retain some notion of collective agency in these matters, some way of not simply atomising us into individualistic feedback loops, the mechanisms we use to police these decisions need to reflect and embody a genuine collective understanding of what is acceptable and what isn’t.
We should no more let ourselves be infantilised by governments than we should let ourselves be conned by the self-serving and disingenuous appeals to free speech spouted by the likes of Elon Musk.
1. Bernard Keane has already laid out a decent version of the civil liberties argument around all this, including touching on the hypocrisy of Musk and the tendency of governments to overreach, so there is no need for me to rehash it.

2. Let’s graciously bestow that descriptor on the squealing of politicians and the unacknowledged biases of mainstream media journalists who, on their own definitions of objectivity, are deeply conflicted in this matter.

3. This is stuff I have looked at in both Why the Future is Workless and The Future of Everything (link), and it has (more or less) preoccupied me for a while now.

4. What we have traditionally called social license is perhaps just another top-down imposition by elites. Before the classificatory possibilities that these platforms’ algorithms have opened up, most of us were happy enough, or had no alternative but, to let a collective decision arise, mediated through laws, regulations and the sort of debate we have usually associated with the media and the public sphere, all matters in the hands of traditional gatekeepers. The point is, those days have gone.

5. I recognise that in saying this I am back in Bernard Keane territory.

6. This suggests the need for an overarching statement of values, or rights, that would provide at least some semblance of the shared values needed to guide the writing and implementation of regulations around these matters. A protected right of free speech seems a minimum requirement if we are not just going to be jumping at shadows and restricting freedoms every time something like this happens.

7. I’m only about a third of the way into this, but it is really interesting and very accessible.

8. To the extent that journalism now operates in the same digitised space and is subject to the same sort of audience atomisation, traditional notions of objectivity have to be reconsidered. But I’ll talk about that in a future article.