
Selected publications
- Straub, V. J. & Bright, J. Unite the study of AI in government: with a shared language and typology. AI Soc. 40, 1531–1532 (2025).
- Straub, V. J. We need a new language to talk about AI that highlights its machine nature. Apolitical (2024).
- Straub, V. J. It’s time we involve citizens in the AI revolution. Sage Methods Research Community (2020).
Unite the study of AI in government: with a shared language and typology
This article was originally published in AI & Society on 7 May 2024.
Abstract
With the arrival of generative Artificial Intelligence (AI), we have arguably entered a brave new world of opportunities and risks in technology adoption—and public institutions cannot afford to misstep. Although the emerging study of AI in government has advanced insights into important topics such as algorithmic aversion and bias, we believe several weaknesses undermine the field’s validity and policy usefulness. As a result, it is high time scholars and policy advocates across social and technical fields unite so that we realize the tremendous potential AI holds to build resilience in government—and wider society. Here, we offer some thoughts on the issues in current research practice as we see them, and provide a few suggestions for how social scientists and AI scholars might overcome them by adopting a shared language and typology.
We need a new language to talk about AI that highlights its machine nature
This article was originally published in Apolitical on 30 April 2024.
- The problem: With the increasing sophistication and adoption of AI, government leaders need to adopt a more accurate vocabulary to stop the spread of false narratives.
- Why it matters: The history and development of the terms used to talk about AI align with institutional privilege and confer authority; terms set the agenda and guide policy: we should aim for accurate and informative language that can help inform us all.
- The solution: One way forward is championing a glossary of universally accepted AI terms that incorporates more terms highlighting the machine nature of AI models; in our work at the Alan Turing Institute, the UK’s national institute for data science and AI, we have been actively working on this problem.
With the third full year of the UK Government’s National AI Strategy well underway, recent advancements in the capabilities of AI systems continue to grab the headlines. The latest example is large language models (LLMs) like ChatGPT, which derive their power from vast amounts of language data.
Although LLMs can be adapted to an impressive range of tasks, the language used in government, as well as in business and the media, to discuss them is often taken out of context, confused and, at worst, akin to misinformation in its effects.
The latest of many examples is the word ‘hallucinate’, which Cambridge Dictionary named its word of the year for 2023. In simple terms, it refers to the tendency of programs like ChatGPT to make errors and provide made-up or false information.
Yet, saying that a chatbot is hallucinating makes it sound like it has a mind of its own that sometimes imagines things. As Giada Pistilli, principal ethicist at Hugging Face, which hosts many state-of-the-art AI models, has noted, the anthropomorphising language of hallucination obscures what AI models are really doing. A more accurate term may simply be ‘malfunction’.
Philosophically fraught words
Many policymakers might be forgiven for adopting terms promulgated by researchers without knowing that many of these terms are themselves contested. Fairness can mean different things depending on which researchers you ask, for instance, but is regularly invoked in discussions about AI.
However, as many politicians and the wider public still struggle to know what to make of AI and how to separate AI fact from fiction, the language we use clearly matters.
As esteemed computer scientist Murray Shanahan has written, we need to avoid “the misleading use of philosophically fraught words associated with human cognition to describe the capabilities of LLMs”, words such as ‘belief’, ‘understanding’ and — perhaps the most uninformative of all — ‘consciousness’.
Others have already stressed this point and argued that we need to stop talking about LLMs in the way we talk about humans. With all the excitement and fear surrounding AI, especially LLMs, we must be measured.
We need to know what this technology can and can’t do, and what risks it poses, so that we can develop both a deeper understanding of it and a more comprehensive account of its societal impact. A vocabulary that stresses the machine nature of AI models is the vital first step towards these two goals.
Where to go from here?
Over the last few years, a number of useful terms for discussing LLMs have, of course, gradually entered the lexicon. These include algorithmic bias and interpretability, among others. However, these terms are often relegated to the side-lines, grouped under ‘ethical concerns’ and discussed after the fact. So where do we go from here?
One way forward is for governments and international organisations to champion a glossary of universally accepted AI terms, one that incorporates more terms highlighting the machine nature of AI models.
The EU has made progress in this area by developing a glossary of human-centric AI terms. This can serve as inspiration for the UK Government, which lacks an equivalent.
In our work at the Alan Turing Institute, the UK’s national institute for data science and AI, we are actively thinking about this problem. In a recent paper, we mapped over 100 concepts currently featured in discussions of AI, including interpretability, explainability and oversight. Based on our review, we then introduced three new terms of our own to capture the language we think governments should be using.
Other new terms that foreground the artificial nature of AI and encapsulate the entire set of criteria for evaluating AI systems would provide policymakers with a common language to talk about all types of new AI systems and the implications they carry for government and society.
Adopting and promoting new terms may sound like wishful thinking, but it has happened before. Just take the word ‘sustainability’, whose origin dates back less than 45 years and which has since become a cornerstone of business talk and political debate.
Half a century ago, when AI was still largely the purview of science fiction, the consequences of language were smaller. Now words really matter and we have a chance to get them right. The history and trajectory of AI language align with institutional privilege and confer decision-making power. Terms set the agenda and orient progress: we should aspire to holistic and accurate language that can help inform us all.

It's time we involve citizens in the AI revolution
This article was originally published by Sage Research on 9 Jan 2020.
With intelligent machines increasingly playing a role in our daily lives, the public has a right to be informed of the social implications of new technologies.
As the ongoing revolution in robotics and artificial intelligence (AI) disrupts society, it has reignited debate about which principles should guide the development of new technologies and how we should use them. But although topics like automation and algorithmic bias are now under the public spotlight, there is not enough focus on ensuring that citizens understand how intelligent machines could shape us, and even change what it means to be human altogether. This will only get worse if we continue to let industry and academia steer the debate and leave the public out of assessing the social implications of new technology. In response, some are pushing for more transparency in AI research, but that’s not the only measure we should be taking.
The field of AI aims to develop computer systems that can perform tasks we normally associate with human thinking. For example, a program that translates text from one language to another, or a model that identifies diseases from radiologic images: both can be viewed as ‘possessing’ some form of artificial intelligence. Robots are often the containers for these systems (AI is the brain and the robot its body, if you will).
The pace at which these technologies are transforming our economy and everyday lives is impressive. But often we don’t stop to ask how these systems actually work; in some cases, they still depend on a largely invisible (often female) data labeling workforce.
More alarmingly, we give little thought to the social consequences of adopting such technologies. Previous technological innovations, like steam power and electricity, have modified the way we live, of course. But so far they have not fundamentally altered what makes us human and what differentiates us from machines—our capacity for love or, more generally, connection, friendship, and empathy. In the age of intelligent machines, this could change.
Now that AI systems are mastering the ability to personalize individual experiences, and with ‘emotional’ companion robots learning to recognize human feelings, our need for human-to-human social interaction may be reduced.
Yet in times of political polarization, it is exactly such interaction that is crucial for fostering love and mutual understanding and for building a cohesive society. As Kai-Fu Lee, the acclaimed AI scientist, has pointed out, for all of AI’s promise, ‘the one thing that only humans can provide turns out to be exactly what is most needed in our lives: love’.
A new public-private initiative to involve citizens in understanding the social implications of AI could unite society under the banner of safeguarding core human values whilst improving AI literacy. But what would this look like in practice? To begin, governments could partner with the tech industry to develop an educational curriculum that teaches the technical basics and social implications of AI to all citizens. At the same time, public and private funders of AI research could adopt an agenda that views AI not just as a technological challenge but as a social one. Both approaches would ensure we develop a stronger grasp of the upsides and potential pitfalls of using new technologies like AI.
This may sound costly and far-fetched, but there are examples that show it is possible. Last year saw the launch in Finland of Elements of AI, a first-of-its-kind online course, accessible to all, that teaches some of the core technical aspects and social implications of AI. Developed by the publicly funded University of Helsinki and the tech company Reaktor, the course has since attracted over 130,000 sign-ups.
The UK has also begun to make headway in this area. The Royal Society, for example, last year launched a ‘You and AI’ public debate series to build a greater understanding of the ways AI affects our lives, and how it may affect them in the future. Similarly, the RSA brought together a citizens’ jury to deliberate about the ethical use of AI, and earlier this year, innovation foundation Nesta showed how government support and public funding could be used to advance the use of AI tools in schools and colleges. At the University of Oxford, the announcement of a new Institute for Ethics in AI also means students from the arts and humanities will soon be able to study the social implications of AI (although the way this initiative is being funded has drawn significant criticism).
But these are still just small drops in the ocean compared to the funding flowing into developing better AI technology. Regardless of the form any initiative to understand the social implications of AI takes, what matters above all now is that we put the issue centre stage in the AI debate.
Half a century ago, when AI and robots were still largely the purview of science fiction, the consequences for society were small. Now that both increasingly play a role in our daily lives, every citizen has a stake in the matter. At the start of a new decade, it’s time we demand policymakers think about how the AI revolution can not only grow our economy but also strengthen our social bonds and consolidate our democracy.