The world has witnessed breathtaking advances in generative artificial intelligence (AI), with ChatGPT among the best-known examples. To prevent harm and misuse of the technology, politicians are now considering regulating AI. Yet they face an overlooked barrier: AI may have a right to free speech.
Under international law, humans possess an inviolable right to freedom of thought. As part of this, governments have a duty to create an environment where people can think freely.
As we’ve seen with ChatGPT, AI can support our thinking, providing information and offering answers to our questions. This has led some to argue that our right to think freely may require giving AI a right to speak freely.
Free thought needs free speech
Recent articles, papers and books from the US have made the case that AI has a right to free speech.
Corporations, like AI systems, are not people. Yet the US supreme court has ruled that government should not suppress corporations’ political speech. This is because the first amendment protects Americans’ freedom to think for themselves.
Free thought, says the US supreme court, requires us to hear from “diverse and antagonistic sources”. The US government telling people where to get their information would be an unlawful use of “censorship to control thought”. So corporations’ free speech is believed to create an environment where individuals are free to think.
The same principle could extend to AI. The US supreme court says that protecting speech “does not depend upon the identity of its source”. Instead, the key criterion for protecting speech is that the speaker, whether an individual, corporation or AI, contributes to the marketplace of ideas.
AI and misinformation
Yet an unthinking application of free speech law to AI could be damaging. Giving AI free speech rights could actually harm our ability to think freely. We have a term, sophist, for those who use language to persuade us of falsehoods. While AI super-soldiers would be dangerous, AI super-sophists could be much worse.
An unconstrained AI might pollute the information landscape with misinformation, flooding us with “propaganda and untruth”. But punishing falsehoods could easily stray into censorship. The best antidote to AI’s falsehoods and fallacies could be more AI speech that counters misinformation.
AI could also use its knowledge of human thinking to systematically attack what makes our thought free. It could control our attention, discourage pause for reflection, pervert our reasoning, and intimidate us into silence. Our minds could therefore become moulded by machines.
This could be the wake-up call we need to spur a renaissance in human thinking. Humans have been described as “cognitive misers”, which means we only really think when we need to. A free-speaking AI could force us to think more deeply and deliberately about what is true.
However, the huge quantities of speech that AI can produce could give it an outsized influence on society. Currently, the US supreme court views silencing some speakers to hear others better as “wholly foreign to the first amendment”. But restricting the speech of machines might be necessary to allow human speech and thought to flourish.
Proposed regulation of AI
Both free speech law and AI regulation must consider their impact on free thought. Take the European Union’s draft AI act and its proposed regulation of generative AI such as ChatGPT.
Firstly, this act requires AI-generated content to be disclosed. Knowing content comes from an AI, rather than a person, might help us evaluate it more clearly – promoting free thought.
But permitting some anonymous AI speech could help our thinking. AI’s owners may experience less public pressure to censor legal but controversial AI speech if such speech were anonymous. Anonymity could also encourage us to judge AI speech on its merits rather than reflexively dismissing it as “bot speech”.
Secondly, the EU act requires companies to design their AI models to avoid generating illegal content, which in EU member states includes hate speech. But this could prevent both legal and illegal speech from being generated. European hate speech laws already cause both legal and illegal online comments to be deleted, according to a think tank report.
Holding companies liable for what their AI produces could also incentivise them to unnecessarily restrict what it says. In the US, section 230 of the Communications Decency Act shields social media companies from much legal liability for their users’ speech, but it may not protect AI’s speech. We may need new laws to insulate corporations from such pressures.
Finally, the act requires companies to publish summaries of copyrighted data used to train (improve) AI. The EU wants AI to share its library record. This could help us evaluate AI’s likely biases.
Yet humans’ reading records are protected for good reason. If we thought others could know what we read, we might shy away from controversial but potentially useful texts. Similarly, revealing AI’s reading list might pressurise tech companies not to train AI with legal but controversial material. This could limit AI’s speech and our free thought.
Thinking with technology
As Aza Raskin from the Center for Humane Technology points out, threats from new technologies can require us to develop new rights. Raskin explains how the ability of computers to preserve our words led to a new right to be forgotten. AI may force us to elaborate and reinvent our right to freedom of thought.
Moving forward, we need what the legal scholar Marc Blitz terms “a right to think with technology” – freedom to interact with AI and computers, using them to inform our thinking. Yet such thinking may not be free if AI is compelled to be “safe … aligned … and loyal”, as tech experts recently demanded in a petition to pause AI development.
Granting AI free speech rights would both support and undermine our freedom of thought. This tension points to the need for AI regulation. Yet such regulatory action must clearly show how it complies with our inviolable right to freedom of thought if we are to remain in control of our lives.
This article was originally published in The Conversation and is republished here with permission.