Larry Cuban on School Reform and Classroom Practice: What the History of Supermarkets Teaches Us About AI in Schools (Guest Post by Andrew Cantarutti)
Andrew Cantarutti has taught in Canadian public and private schools for over a decade. Balancing writing with high school teaching is demanding work, and he carries it off well. He writes at “The Walled Garden Education.” He published this piece on October 7, 2025.
The modern supermarket began, oddly enough, with a single store in Memphis, Tennessee. In 1916 Clarence Saunders opened Piggly Wiggly: the first self-service grocer. Before then, shopping meant visiting the butcher, the baker, and the greengrocer separately. Saunders’ innovation was simple and radical: let customers pick their own goods off the shelves. It saved time, cut costs, and felt liberating. Almost overnight the idea caught on and similar self-service stores popped up across the country.
That convenience unlocked further innovations. Refrigeration and cold logistics — the so-called “cold chain” — expanded rapidly, allowing “fresh” fruit and leafy greens on supermarket shelves year-round instead of only in season. What began as an advance in service soon became infrastructure: refrigerated warehouses, temperature-controlled shipping, and an appetite for predictable, year-round supply. Those systems let supermarkets scale and centralize supply chains, but they also shifted costs and consequences off the shopping floor and into the global environment. PBS’s recent reporting on supermarkets and refrigeration summarizes the scale plainly: the cold chain already accounts for 8% of electricity use globally, and that footprint is a serious climate problem. Nicola Twilley (author of Frostbite: How Refrigeration Changed Our Food, Our Planet, and Ourselves) put it bluntly in her reporting on the subject:
“If the rest of the world builds a U.S.–style cold chain … there won’t be a harvest to store in it.”
Those infrastructural shifts changed diets and health. The supermarket’s capacity to stock highly processed, shelf-stable products, engineered for taste, convenience, and long shelf life, reshaped eating habits. A well-established body of epidemiological research now links high consumption of ultra-processed foods to a range of poor health outcomes: cardiovascular disease, obesity, diabetes, and more. In other words, the convenience of packaged food came with demonstrable, population-level costs.
The ripple effects reached across borders and into labour systems. Global demand for inexpensive seafood helped create vast, efficient supply chains that cut costs by offshoring production and squeezing margins. Investigations and human-rights reports have repeatedly documented forced labour and other serious abuses in parts of the Southeast Asian seafood and shrimp industries — problems that are inextricably linked to the global appetite supplied by large retailers. In short: lower prices and year-round availability in the supermarket aisle are not magically produced; they’re the endpoint of complex, often exploitative supply chains.
The through-line is straightforward and, sadly, predictable. Each convenience — self-service, refrigeration, global sourcing, processed food — solved a short-term problem for shoppers and firms. Each also created new dependencies: on energy-hungry infrastructure, on centralized logistics, on large corporate suppliers, and on production practices that externalize environmental and human costs. The lesson is fairly simple: what looks like a small convenience in the present can, when scaled and institutionalized, reshape entire ecosystems in ways that are costly to reverse.
So what does this have to do with artificial intelligence in schools?
AI arrives promising the same kinds of immediate wins supermarkets once offered. It promises automated grading, instant feedback, individualized practice, and teaching aids that can ostensibly reduce teacher workload and personalize learning at scale. These promises are attractive — who wouldn’t want better feedback and more time to teach? But the supermarket analogy compels us to look beyond convenience.
Three linked risks emerge from the comparison.
- Environmental impact is not negligible. Large modern AI models, and the data centres that host them, consume substantial electricity for both training and inference. Peer-reviewed work first raised the alarm several years ago about the carbon cost of training large models, and follow-up studies and commentaries have made the point more urgent: AI’s energy demands are real, yet measurement and transparency remain uneven. Recent reporting and research suggest that the footprint of large models and the data centres that serve them could become even more significant as their adoption scales. We should treat institutional adoption of energy-intensive tools as not only a pedagogical decision, but an environmental one.
- Market entrenchment can erode alternatives. A handful of powerful vendors already host the most capable models; if school systems quickly adopt a narrow range of commercial AI tools, they risk creating the same vendor concentration supermarkets created for food supply. That entrenchment narrows the field for smaller pedagogical innovators and shapes classroom practice around the affordances and incentives of a few corporate platforms. Once students, teachers, and assessment systems become organized around a particular AI workflow, rolling back or diversifying becomes costly, both technically and culturally. That dynamic is quietly as consequential as any technical limitation.
- Negative externalities and second-order harms are likely and sometimes delayed. Supermarkets altered diets in ways that only became obvious decades later; AI may change cognitive practices — revision habits, problem-solving persistence, and assessment integrity — in ways that are slow to materialize but hard to reverse. Technological dependence, novel forms of cheating, bias and fairness issues, and subtle changes in attention are plausible harms that are more easily prevented than undone. The risk is not that AI will do nothing good; the risk is that institutional endorsement before harms are understood will entrench those harms.
From those risks follow three practical takeaways for schools.
1. Treat AI adoption as an environmental as well as a pedagogical decision.
Ask vendors to disclose energy use and commit to sustainability practices before procurement. Prefer lightweight solutions, locally run models, or hosted options with clear, transparent carbon accounting. Schools should insist on that transparency as a condition of adoption.
2. Counter optimism bias with rigorous impact assessment.
New technologies glitter: they promise efficiency, novelty, and competitive advantage. Humans tend to overestimate short-term benefits and underestimate long-term costs. School systems should normalize pre-procurement impact assessments that go beyond functionality and price, explicitly measuring pedagogical effect, equity, privacy, and other likely externalities (including environmental cost). That kind of evidence should drive procurement decisions.
3. A practical rule of fiduciary prudence: delay institutional adoption long enough for evidence to appear.
When major new technologies arrive, there is an asymmetry of risk: early institutional adopters create dependence and expectations among students and parents that are very difficult to reverse. A rule of thumb like “wait at least 12 months before formal classroom endorsement” is not anti-innovation; it is fiduciary. It buys time for independent studies, for vendor transparency, for small-scale pilots to reveal unintended harms. In the case of AI, where misuse can quickly become normalized (cheating, dependence, biased feedback), waiting a year can make the difference between reversible experiment and systemic entrenchment.
A note about risk, prudence, and progress
The supermarket story is not an argument to reject convenience wholesale. Supermarkets solved real problems. But their evolution also shows how immediate gains accumulate into systemic consequences — environmental, economic, and cultural — that were not obvious on day one. AI in education may bring benefits, but the question for school leaders is whether those benefits outweigh long-term costs, and whether we can deploy the technology in ways that preserve alternatives, protect learners, and limit ecological harm.
A modest policy of prudence is not bureaucratic fear; it is responsible stewardship. Schools have a duty to protect learners and the public trust. Our children’s cognitive development, privacy, and the planet’s health are not appropriate variables in a vendor’s marketing plan.
If the supermarket era teaches one clear lesson, it’s this: convenience that scales without constraint becomes hard to unmake. If we adopt AI, we should only do so with evidence, with contract safeguards, and with an eye to the alternatives we risk losing.
This blog post has been shared by permission from the author.
Readers wishing to comment on the content are encouraged to do so via the link to the original post.
Find the original post here:
The views expressed by the blogger are not necessarily those of NEPC.