
Code Acts in Education: Degenerative AI in Education

After months of hype about the potential of artificial intelligence in education, signs are appearing of how technologies like language models may actually integrate into teaching and learning practices. While dreams of automation in education have a very long history, the current wave of “generative AI” applications – automated text and image producers – has led to an outpouring of enthusiastic commentary, policy activity, accelerator programs and edtech investment speculation, plus primers, pedagogic guides and training courses for educators. Generative AI in education, all this activity suggests, will itself be generative of new pedagogic practices, where the teacher could have a “co-pilot” assistant guiding their pedagogic routines and students could engage in new modes of studying and learning supported by automated “tutorbots”.

“Every child will have an AI tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful,” wrote venture capital investor Marc Andreessen in one of the most hyperbolic examples of recent AI boosterism. “The AI tutor will be by each child’s side every step of their development, helping them maximize their potential with the machine version of infinite love.”

But what if, instead of being generative of educational transformations, AI in education proves to be degenerative—deteriorating rather than improving classroom practices, educational relations and wider systems of schooling? The history of technology tells us that no technology ever works out entirely as intended, and neither is it just a neutral tool to achieve beneficial ends. AI, like all tech, has longer historical roots that shape its development and the tasks it is set to perform, and often leads to unanticipated and sometimes deleterious consequences. Current instantiations of AI are infused with a kind of politics that applies technical and market solutions to all social problems.

While I appreciate some of the current creative and critically-informed thinking about generative AI in education, questions remain about its possible long-term impacts. Important work on the ethics, harms, and social and environmental implications of AI in education has already appeared, and it should remain a topic of urgent deliberation rather than one constrained by narrow claims of inevitability. Here I explore the idea of degenerative AI in education from three directions: degenerative AI as (1) extractive experimental systems, (2) monstrous systems, and (3) scalable rent-seeking systems.

Extractive experimental systems

Many generative AI applications in education remain speculative, but one prototypical example of an automated tutorbot has already started being introduced into schools. The online learning non-profit Khan Academy has built a personalized learning chatbot called Khanmigo, which it claims can act as a tutor for students and an assistant for teachers. The New York Times journalist Natasha Singer reported on early pilot tests of Khanmigo in a US school district, suggesting the district “has essentially volunteered to be a guinea pig for public schools across the country that are trying to distinguish the practical use of new A.I.-assisted tutoring bots from their marketing promises”. The results appear mixed.

The framing of the Khanmigo trials as pilot tests and students as guinea pigs for the AI is instructive. Recently, Carlo Perrotta said educators appear to be joining a “social experiment” in which the codified systems of education — pedagogy, curriculum and assessment — are all being reconfigured by AI, demanding laborious efforts by educators to adjust their professional practices. This pedagogic labour, Perrotta suggested, primarily means teachers helping generative AI to function smoothly.

The Khanmigo experiment exemplifies the laborious efforts demanded of teachers to support generative AI. Teachers in the NYT report kept encountering problems — such as the tutorbot providing direct answers when teachers wanted students to work out problems — with Khan Academy responding that they had patched or fixed these issues (wrongly in at least one instance).

This raises two issues. The first is the possible cognitive offloading and reduction of mental effort or problem-solving skill that generative AI may entail. AI may exert degenerative effects on learning itself. More prosaically, AI is likely to reproduce the worst aspects of schooling – the standardized essay is already highly constrained by the demands of assessment regimes, and language models tend to reproduce it in format and content.

The second issue is what Perrotta described in terms of a “division of learning”: a term from Shoshana Zuboff denoting a distinction between AI organizations with the “material infrastructure and expert brainpower” to learn from data and fine-tune models and processes, and the unpaid efforts of everyday users whose interactions with systems flow back into their ongoing development. Elsewhere, Jenna Burrell and Marion Fourcade have differentiated between the “coding elite”, a new occupational class of technical expertise, and a newly marginalized or unpaid workforce, the “cybertariat”, from which it extracts labour. In the Khanmigo case, Khan Academy’s engineers and executives are a new coding elite of AI in education, extracting the labour of a cybertariat of teachers and students in the classroom.

AI may in the longer term put further degenerative pressure on classroom practices and relations. It both demands teachers’ additional unpaid labour and extracts value from it. AI and other predictive technologies may also, as Sun-ha Hong argues, extract professionals’ discretionary power, reshaping or even diminishing their decision-making and scope for professional judgment. In the experimental case of Khanmigo, the teacher’s discretionary power is partially extracted too, or at the very least complicated by the presence of a tutorbot.

Monstrous systems

The Khan Academy experiment is especially significant because Khanmigo has been constructed through an integration with OpenAI’s GPT-4. Ultimately this means the tutorbot is an interface to OpenAI’s language model, which enables the bot to generate personalized responses to students. There are even reports the partnership could be the basis of OpenAI’s plans to develop an entire OpenAI Academy—a GPT-based alternative to public education. 

OpenAI’s Sam Altman and Salman Khan both tend to justify their interests in personalized learning tutoring applications by reference to an influential education research study published in 1984—Benjamin Bloom’s 2 sigma problem. This is based on the observation that students who receive one-to-one tutoring score on average 2 standard deviations higher on achievement measures than those in a conventional group class. The problem is how to achieve the same results when one-to-one tutoring is “too costly for most societies to bear on a large scale”. Bloom himself suggested “computer learning courses” might “enable sizeable proportions of students to achieve the 2 sigma achievement effect”.
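To put the “2 sigma” figure in perspective, here is a rough illustration of my own, which assumes achievement scores are approximately normally distributed (an assumption made for illustration, not a claim from Bloom’s study or this post): a student lifted two standard deviations above the mean of a conventional class would score above roughly 98% of that class.

```python
# Illustrative sketch only: what a "2 sigma" gain implies under a simple
# normal model of achievement scores. The normal assumption is mine,
# for illustration; it is not drawn from Bloom's study.
from scipy.stats import norm

sigma_gain = 2.0
# Share of a conventional class that a student two standard deviations
# above its mean would outscore under this model:
percentile = norm.cdf(sigma_gain)
print(f"A 2 sigma gain places a student above about {percentile:.1%} of the class")
# Output: A 2 sigma gain places a student above about 97.7% of the class
```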

At a recent talk on the “transformative” potential of new AI applications for education, Sam Altman claimed:

The difference between classroom education and one-on-one tutoring is like two standard deviations – unbelievable difference. Most people just can’t afford one-on-one tutoring… If we can combine one-on-one tutoring to every child with the things that only a human teacher can provide, the sort of support, I think that combination is just going to be incredible for education.

This mirrored Khan’s even more explicit invocation of Bloom’s model in a recent TED Talk on using AI to “save education”, where he also announced Khanmigo.

Bloom’s model emphasizes an approach to education termed “mastery learning”, a staged approach that includes rounds of instruction, formative assessment, and feedback in order to ensure students have mastered key concepts before moving on to the next topic. For entrepreneurs like Altman and Khan, the 2 sigma achievement effect of mastery learning is considered achievable with personalized learning tutorbots that can provide the required one-to-one tuition at low cost and large scale.

Marie Heath and colleagues have recently argued that educational technology is overly dominated by psychological conceptions of individual learning, and therefore fails to address the social determinants of educational outcomes or student experiences. This individual psychological approach to learning is only exacerbated by AI-based personalized learning systems based on notions of mastery and its statistical measurement. The aim to achieve the 2 sigma effect also reflects the AI industry assumption that human intelligence is an individual capacity, which can therefore be improved with technical solutions — like tutorbots — rather than something shaped by educational policies and institutions.

Moreover, the personalized learning bots imagined by Khan, Altman, and many others are unlikely to function in the neat, streamlined way they suggest. That’s because every time they make an API call to a language model for content in response to a student query, they are drawing on vast reserves of information that are very likely polluted by past misinformation or biased and discriminatory material, and may become even more so as automated content floods the web. As Singer put it in the NYT report,

Proponents contend that classroom chatbots could democratize the idea of tutoring by automatically customizing responses to students, allowing them to work on lessons at their own pace. Critics warn that the bots, which are trained on vast databases of texts, can fabricate plausible-sounding misinformation — making them a risky bet for schools.

Like all language models, tutorbots would be “plagiarism engines”, scooping up past texts into new formations and possibly serving up misinformation as a substitute for authoritative curriculum materials.

Perhaps more dramatically, the sci-fi writer Bruce Sterling has described language models as “beasts”. “Large Language Models are remarkably similar to Mary Shelley’s Frankenstein monster”, Sterling wrote, “because they’re a big, stitched-up gathering of many little dead bits and pieces, with some voltage put through them, that can sit up on the slab and talk”.

These monstrous misinfo systems could lead to what Matthew Kirschenbaum has termed a “textpocalypse” — a future overrun by AI-generated text in which human authorship is diminished and all forms of authority and meaning are put at risk. “It is easy now to imagine a setup wherein machines could prompt other machines to put out text ad infinitum, flooding the internet with synthetic text devoid of human agency or intent: gray goo, but for the written word,” he warned.

It’s hard to imagine any meaningful solution to Bloom’s 2 sigma problem when mastering a concept could be jeopardized by the grey goo spat back at a student by a personalized learning tutorbot. This may even become worse as language models themselves degenerate through “data poisoning” by machine-generated content. AI is, however, potentially valuable in other terms.

Scalable rent-seeking systems

One of the most eye-opening parts of Natasha Singer’s report on Khanmigo tests in schools is that while the schools currently trialing it are doing so without having to pay a fee, that will not remain so after the pilot test. “Participating districts that want to pilot test Khanmigo for the upcoming school year will pay an additional fee of $60 per student”, Singer reported, noting Khan Academy said that “computing costs for the A.I. models were ‘significant’”.

There appear to be symmetries here with OpenAI’s business model. OpenAI sought first-mover advantage in the generative AI race by launching ChatGPT for free to the public late in 2022, before later introducing subscription payments. Sam Altman described its operating costs as “eye-watering”. Another monetization strategy is integrating its AI models with third-party platforms and services. With Khanmigo integrated with GPT-4, it appears its “significant” computing costs are going to be covered by charging public schools the $60 per-student annual subscription fee.
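A back-of-the-envelope calculation suggests the scale of the sums at stake (my illustration, using the $60 per-student fee reported by Singer; the district enrolment figure below is an assumption, not a number from the report):

```python
# Hypothetical illustration of the subscription arithmetic described above.
# Only the $60 per-student annual fee comes from Singer's report; the
# enrolment figure is assumed for the sake of the example.
fee_per_student_per_year = 60          # USD, as reported
assumed_district_enrolment = 10_000    # hypothetical mid-sized district
annual_cost = fee_per_student_per_year * assumed_district_enrolment
print(f"Annual Khanmigo subscription for this hypothetical district: ${annual_cost:,}")
# Output: Annual Khanmigo subscription for this hypothetical district: $600,000
```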

In other words, Khan Academy appears to be developing a rent-seeking business model for Khanmigo where schools help defray the operating costs of AI models. It reflects the growing tendency in the education technology industry towards rentiership models of capitalization, where companies exact economic rent from educational institutions in the shape of ongoing subscriptions for digital services, and can derive further value from extracting data about usage too.

However, schools paying rent for Khanmigo, according to Singer’s report, may not be a viable long-term strategy.

Whether schools will be able to afford A.I.-assisted tutorbots remains to be seen. … [T]he financial hurdles suggest that A.I.-enhanced classroom chatbots are unlikely to democratize tutoring any time soon.

Mr. Nellegar, Newark’s ed tech director, said his district was looking for outside funding to help cover the cost of Khanmigo this fall.

“The long-term cost of A.I. is a concern for us,” he said.

In a current context where schools seem increasingly compelled to try out new automated services, institutions paying edtech rent could lead to new financial and fundraising challenges within the schooling sector, even as it bolsters the market value of AI in education. The Khanmigo example suggests considerable diversion of public funds to defray AI computing costs, potentially shifting schools’ spending commitments towards technological solutions at the expense of other services or programs. In this sense, AI in education could affect schools’ capacity for other spending at a time when many face conditions of austerity and underfunding.

There is a paradox between these financial arrangements and the personalized learning approach inspired by Bloom. As Benjamin Bloom himself noted, one-to-one tutoring at scale is too expensive for school systems to afford. AI enthusiasts have routinely suggested personalized learning platforms, like Khan Academy, could solve this scalability problem at low cost. But as Khanmigo makes clear, scalable personalized tutorbots may themselves be a drain on the public financial resources of schools. As such, AI may not solve the cost problem of large-scale individual tutoring, but reproduce and amplify it due to the computing costs of AI scaling. Putting pressure on the finances of underfunded schools is, as Singer pointed out, unlikely to democratize tutoring, but it may be a route to long-term income for AI companies.

Resistant and sustainable systems

Generative AI may yet prove useful in some educational contexts and for certain educational tasks and purposes. Schools may be right to try it out — but cautiously so, given its benefits are so far predicted rather than proven. It remains important to pay attention to its actual and potential degenerative effects too. Besides the degenerative effects it may exert on teachers’ professional conditions, on learning content, and on schools’ financial sustainability, as I’ve outlined, AI also has degenerative environmental effects and impacts on the working conditions of the “hidden” workers in the Global South who help train generative models.

This amounts to what Dan McQuillan calls “extractive violence”, as “the demand for low-paid workers to label data or massage outputs maps onto colonial relations of power, while its calculations demand eye-watering levels of computation and the consequent carbon emissions” that threaten to worsen planetary degradation. Within the educational context, similarly, “any enthusiasms for the increased use of AI in education have to reckon with the materiality of this technology, and its deleterious consequences for the planet” and the workers hidden behind the machines.

The hype over AI applications such as language models “is stress-testing the possibility of real critique, as academia is submerged by a swirling tide of self-doubt about whether to reluctantly assimilate them”, continues Dan McQuillan. Resignation to AI as an inevitable feature of the future of education is dangerous, as it is likely to lock education institutions, staff and students into technical systems that could exacerbate rather than ameliorate existing social problems, like teacher over-work, degradation of learning opportunities, and school underfunding.

McQuillan’s suggestion is to resist current formations of AI. “Researchers and social movements can’t restrict themselves to reacting to the machinery of AI but need to generate social laboratories that prefigure alternative socio-technical forms”. With the AI models of OpenAI currently being tested in classroom laboratories through Khan Academy, what kind of alternative social laboratories could educators create in order to construct more resistant and sustainable education systems and futures than those being configured by degenerative AI?

 


Ben Williamson

Ben Williamson is a Chancellor’s Fellow at the Centre for Research in Digital Education and the Edinburgh Futures Institute at the University of Edinburgh.