
Critical Studies of Education & Technology: We Need to Talk About AI in Terms of Values, Not Vibes

The current AI hype bubble is being driven by little more than emotions. Instead of being awed by speculative promises around AI futures, we need to engage with current forms of AI in terms of their values and politics. Above all, we need to step up the fight for new democratised forms of AI governance and development.

The AI hype that has dominated the past few years has been fuelled primarily by emotions. Early on, many people were seduced by the initial wow factor of seeing GenAI tools spit out credible-looking text and eerily realistic images. This was followed by panics around jobs being replaced by AI, and students cheating their way through degrees. More recently, public discourse has split into distinct camps – some people continuing to be thrilled that ‘this changes everything’ and others leaning into what Wired recently described as ‘generalized animosity towards AI’. Thus, while many people are still holding out for an existential reboot in the shape of AGI or the singularity, a mood of despondency and disenchantment is also building – led by those feeling increasingly unnerved by the rise of deepfakes and disinformation, exhausted by AI slop, and alarmed about the environmental costs of data centres.

So, nearly three years after the launch of ChatGPT, any answers to the pressing challenge of where society needs to go next with AI still seem to depend on how we feel about AI. In the absence of any clear-cut use cases, support for AI remains mostly a matter of faith (rather than a matter of fact), sustained by popular beliefs around where AI innovation might be taking us in the future. In contrast, opposition to AI seems to be driven mostly by fears around what is being lost and/or frustrations around the mid-ness of it all.

None of this provides an adequate basis from which to collectively figure out what forms of AI we might want to have (and not have) in our societies. The emotions swirling around the topic of AI are largely reactive – distracting people from the fact that they can be proactively involved in deciding what forms of AI are developed and for what ends. In particular, framing AI in terms of personal hunches, hopes, fears and feelings is a convenient way of diverting publics and their policymakers from the corporations, financiers and tech cabals currently driving the development of this technology for their own ends.

AI is not simply a vibe but something that is already having a tangible impact on everyday lives and everyday institutions. As such, it is crucial that AI becomes a matter of democratic deliberation around the actual outcomes of this technology, rather than a Big Tech free-for-all that the majority of the population feel somewhat awestruck by and powerless to affect. Continuing to frame AI simply in terms of emotions is stopping the society-wide conversations, collective deliberations and consensual agreements around AI that urgently need to take place. AI is something that affects us collectively. We need to start engaging with AI as a matter of values (and therefore a matter of politics). In short, AI is a matter of what we collectively believe is right, and what we collectively believe is wrong … and therefore something that requires much deliberation.

All of this points to the need to reframe AI as a normative issue – a focus for debate, discussion and dissensus. For example, the prospect of having AI replace teachers is not something that should simply be driven by a few people’s enthusiasm for usurping traditional ways of schooling, or by many others’ fears that it will inevitably lead to a second-rate education. Instead, the role that AI plays in education should be steered by our collective values around education – what we as a society believe education is for, and what we as a society believe education should be in the future. The same goes for AI in journalism, healthcare, law enforcement and every other area of life that is currently being pitched as ripe for transformation through AI.

Of course, talking about digital technology in terms of values is not something that most people are accustomed to doing (or actively encouraged to do). The tech industry takes great pains to hype its products as either inherently useful and a force ‘for good’ in the world, or simply as neutral tools that can be used in beneficial ways. The general public (aka ‘end users’) are encouraged to see technology development as driven by rational scientific objectivity rather than messy political struggle.

This is, of course, nonsense. The growth of the field of artificial intelligence over the past seventy years has been driven by a very particular set of values, assumptions and norms. AI is built around the valuing of efficiency, precision, calculability, optimisation, predictability, the statistical modelling of complex phenomena, the need for approximation, and an acceptance of statistical errors and bias. These values tend to fit neatly with the mindsets and beliefs of many computer scientists and engineers but bump up hard against many other standpoints – not least the messy ways in which most people actually experience the social world.

Similarly, the development of AI over the past seventy years has played out in ways that also reflect a very particular set of agendas and interests. There is, for example, a rich history of AI R&D being bankrolled by military and security agency funding. It is therefore no coincidence that AI is proving to fit neatly into contexts and regimes that are authoritarian, surveillant and manipulative. It is also no coincidence that AI corresponds with elitist ideological thinking around human engineering and new eugenics. Neither should we be surprised that the dominant AI business model is based around extractive logics, exploitative practices, and an obsession with growth and scalability at all costs. This is an industry unconcerned about exploiting low-paid labour in the global south, causing massive environmental harms, or stealing the intellectual property of human creators.

In short, the forms of AI that we are currently facing constitute a technology (like all technologies) with a distinct set of values and politics. The necessary response is not simply to be thrilled and/or fearful of what might happen next. Instead, the necessary response is to hold the current AI moment up to democratic scrutiny and democratic account. Citizens and communities need to audit, challenge and have the final say on what forms of AI are implemented in their everyday lives, everyday institutions and wider societies. Alternative approaches to developing AI need to be encouraged, along with entirely different modes of governance, stewardship and oversight.

This requires us all getting involved in public conversations about AI in terms of values, agendas, interests, and what we believe to be good, right, desirable and preferable. This requires us to see AI as something to be contested, challenged, and struggled over. After seventy years of AI development, we should now feel comfortable arguing over how this technology could be improved, reimagined or banned outright. Particularly as the world tips over into an era of advancing climate collapse, ecological breakdown and societal instability, AI cannot simply continue to be something that runs on vibes rather than values. AI is something that needs to be contested and reimagined. Other flavours and forms of AI are possible – as is wholly resisting and rejecting AI altogether.

So, there is much work to be getting on with. We need to take seriously recent work around democratising and diversifying AI development and supporting the building of different forms of AI – e.g. participatory data stewardship, publicly-driven dataset creation and model-sharing, publicly-funded access to tech infrastructure and compute, and assistive tools that allow non-experts to create their own AI applications. We also need to take seriously recent work around democratising AI governance. This includes efforts to hold AI to account through legislation, regulation and multilateral standards, as well as deliberative democracy approaches to scrutinising AI – such as citizens’ assemblies, participatory budgeting and community hackathons.

But perhaps most important is sustaining society-wide discussions around the politics of AI. So, what values, what politics, what agendas do we want AI to have … and how can we ensure this happens? What alternative values do we want to see shaping the AI that we have in our lives … values such as care, kindness, compassion, uncertainty, solidarity, a respect for people, a respect for nature? Might we want to push for forms of AI based around priorities of degrowth (rather than growth), or reframe ideas of efficiency in terms of minimal use of resources or minimised environmental harm? What alternative voices need to be driving these discussions? What leads can be taken from growing movements around Indigenous AI, feminist AI, Black AI, Afrikan AI, queer AI and others?

These are all difficult conversations for which there are no clear answers. Nevertheless, these are precisely the kinds of value-driven deliberations that are needed as the AI hype bubble begins to collapse, and as the full extent of AI harms becomes apparent. The future development and roll-out of AI is not something that we should simply feel thrilled or fearful about … we all need to start taking this technology much more seriously.


This blog post has been shared by permission from the author.
Readers wishing to comment on the content are encouraged to do so via the link to the original post.
Find the original post here:

The views expressed by the blogger are not necessarily those of NEPC.

Neil Selwyn

Neil Selwyn is a Professor in the Faculty of Education at Monash University in Australia. He has worked for the past 28 years researching the integration of digit...