AnOpenLetterToAcademicAISceptics
ThoughtStorms Wiki
AI is a huge challenge for education.
Bad things will come from it. Largely not because of AI itself, but because it will be cover for other malfeasances, such as humans abdicating quality control over AI output too quickly, in a rush to save money.
But after reading some recent complaints about AI for education, I went back to read Clay Shirky on MOOCs, back when "teh interwebs" was the big threat to higher education.
And there are good parallels. Particularly the one about how we are inclined to compare the new thing unfavourably to the old thing at its best, not to the old thing at its mediocre average. That's particularly true of AI, where bad AI is compared to good humans, not to bad or average humans.
Higher education institutions have failed dismally to adapt to the internet. That's why good knowledge is locked up in expensive journals and stultifyingly boring formats like academic papers, while the web is awash with fake news and nonsense in easily accessible and digestible chunks.
That isn't the fault of the web. It's the fault of the journals and universities trying to hoard "good" knowledge to themselves and their paying customers rather than trying to maximise the wider dissemination and public understanding of good knowledge. It's the fault of academics who have failed to disentangle their ability to judge the content of a piece of work from the form of that piece of work and continue to reward the right form over the right content.
Faced with the internet, academia, for all its alleged intellectual abilities, has utterly failed to understand and adapt itself to this world. It has failed to maintain its authority. Failed to keep the wider population accurately informed. Failed to protect the public sphere from damaging lies and obvious fraudulence.
I'm being harsh here. But not hostile. I love academia and I wish it had done better. I hope it does figure out how to do better. Even this late in the day.
AI will be an even bigger challenge. And I will again urge everyone who cares about education and the academy to take it seriously. And proactively engage and figure out how to take control of AI rather than let it sideline and destroy you.
The first thing to understand is that chatbots are going to replace books. And "conversation" is going to replace reading passive texts. Conversation and active questioning are such obviously better experiences for an engaged and curious student - compared to reading long inert tracts - that there will be a strong tendency towards chatbots as the preferred medium.
If you are horrified by this idea, and believe that obviously books contain better knowledge than bots, and that the discipline of reading long texts is more important than the discipline of good questioning, then j'accuse: this is on you. The absolute epistemic catastrophe that is coming will be YOUR FAULT.
Because if you believe, as I do, that we need "good knowledge" more than ever, then it's incumbent on all of us to ensure that the bots DO have the best knowledge, and that students know how to elicit it through sophisticated questioning.
Instead of sitting around complaining about attempts by OpenAI and Anthropic to muscle in on their territory, universities and responsible academics should make it their number one priority to ensure they have their own LLM-backed chatbots.
A couple of years ago, when I started advocating this, it was a big ask. But in an age of RAG (retrieval-augmented generation), increasingly powerful open-source language models, commodity cloud hosting for training and fine-tuning, small language models, Mixture of Experts, MCP tools, etc. etc., any reasonably solvent and competent university has the resources necessary to put its academic knowledge into the form of chatbots.
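To make the RAG part of that concrete, here is a minimal sketch of the retrieval half of such a pipeline. It uses a toy bag-of-words cosine similarity in place of real embeddings, and stops at prompt construction rather than calling an actual language model; all names (`course_notes`, `retrieve`, `build_prompt`) and the example texts are illustrative, not any particular library's API.

```python
# Toy RAG retrieval: chunk a knowledge base, score chunks against a
# question, and stuff the best matches into a grounded prompt.
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

def cosine(a, b):
    # Cosine similarity between two Counter "vectors".
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def retrieve(question, chunks, k=2):
    # Return the k chunks most similar to the question.
    q = Counter(tokenize(question))
    return sorted(chunks,
                  key=lambda c: cosine(q, Counter(tokenize(c))),
                  reverse=True)[:k]

def build_prompt(question, chunks, k=2):
    # Ground the model: answer only from retrieved sources.
    context = "\n".join(f"- {c}" for c in retrieve(question, chunks, k))
    return ("Answer using ONLY the sources below. If they don't contain "
            f"the answer, say so.\nSources:\n{context}\n\n"
            f"Question: {question}")

# Stand-ins for a university's course materials, pre-chunked.
course_notes = [
    "Photosynthesis converts light energy into chemical energy in plants.",
    "The French Revolution began in 1789 with the storming of the Bastille.",
    "Mitochondria are the site of cellular respiration in eukaryotic cells.",
]
```

A real deployment would swap the bag-of-words scorer for an embedding model and vector store, and send `build_prompt`'s output to an LLM; the shape of the pipeline stays the same.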
The challenge is quality control: making sure that those chatbots represent you accurately, and ensuring that when they answer questions, they produce correct knowledge, align with your values, and help the asker improve their own knowledge and explore the most relevant information.
Does that seem hard? Well, every middling-sized company racing to automate customer service and put its corporate manuals into RAG behind a chatbot is currently struggling with the same thing. And over time, they will get better at it. Whereas YOU, you are a university. You claim to be one of the highest concentrations of smart people around. It's literally meant to be your "core competence" to accurately instil knowledge in the minds of others. And to ensure, through testing, that the student has acquired that knowledge accurately. When you hand out a qualification you are literally saying "We guarantee that another human being, despite human foibles, has acquired competency in working with and representing a bunch of true ideas from our big collection of true ideas". I mean, as a university, that's your day-job, right?
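Testing a chatbot against a gold-standard question set maintained by academics is the obvious place to start, and it mirrors how a university already examines students. The sketch below shows that loop under stated assumptions: `bot_answer` is a hypothetical stub standing in for the chatbot under test, and the keyword rubric is a deliberately crude proxy for real grading (a production system might use human markers or a grader model).

```python
# Quality-control loop: grade a chatbot against a gold Q&A set.
def bot_answer(question):
    # Hypothetical stub; a real deployment would call your LLM here.
    canned = {
        "What year did the French Revolution begin?": "It began in 1789.",
    }
    return canned.get(question, "I don't know.")

# Gold-standard questions with rubric keywords, curated by academics.
gold_set = [
    {"question": "What year did the French Revolution begin?",
     "must_contain": ["1789"]},
    {"question": "Who wrote On the Origin of Species?",
     "must_contain": ["Darwin"]},
]

def evaluate(answer_fn, gold):
    # Run every gold question through the bot and check the rubric.
    results = []
    for item in gold:
        ans = answer_fn(item["question"])
        passed = all(k.lower() in ans.lower() for k in item["must_contain"])
        results.append({"question": item["question"], "passed": passed})
    accuracy = sum(r["passed"] for r in results) / len(results)
    return accuracy, results

accuracy, report = evaluate(bot_answer, gold_set)
```

Run regularly against every model or knowledge-base update, this is exactly the kind of examination regime a university already knows how to administer; only the examinee has changed.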
So what does it mean for a university, or the academic profession, to throw up its hands and say, "Yeah, well, this is beyond us. Making sure a language model has competency working with our big stock of true ideas? To give a guarantee of that competency? You can't possibly expect us to do that."
Seriously? We can't?
You may feel bitter and acerbic about AI. You may be obsessing about its flaws and failures. But the longer-term trend of AI is that it IS getting better. We achieved that through, yes, more data. But also more checks and balances. More human feedback. More monitoring. SOMEONE is going to figure out how to make an LLM-backed chatbot that accurately reflects their world model and values.
And if you love academia, you have to hope, and work towards making sure, that it's the universities who get there first. Or at least not so late to the party that they become irrelevant.
It's corny to quote Marx's "the point, however, is to change it". But if your attitude to AI is a "resistance" that involves going around kvetching about all the flaws you interpret in it, you just lost. We are at a pivotal moment when capital, and whatever alternative to capital the university still represents, are fighting over control of the information and pedagogical environment of the future. At least give yourselves a chance. At least TRY to win that battle. Please!
If I ran a university today, here's how I would be thinking:
"We need to put this institution on a war footing. We are besieged by enemies:
- purveyors of disinformation who want to discredit us and replace our true knowledge with their lies
- big tech who want to co-opt us
- alt education who will undercut us
We need to re-establish "