ThoughtStorms Wiki

So this is a story about me being kind of stupid as a programmer, and what it made me realise about EducationExploded in the age of ChatBotsAsInterface to LanguageModels.


It goes without saying that AI is massive. But here's what I'm noticing as a software developer: from brief chats with GPT I am learning things about writing Android that I struggled for years to understand, or to use, when trying to grok them just by reading the documentation.

In fact, I'm pretty stupid and lazy, but I just discovered I've spent months reinventing things that are already in Jetpack, things I didn't know were there and didn't know how to search for.

But a few minutes with ChatGPT revealed them and taught me how to use them. (Full story here.)

Had I systematically read all the documentation I'd have known this, but of course, I didn't. Not systematically enough.

Chat is completely revolutionising how I can write Android because it turns thickets of incomprehensible docs into immediate answers to questions.

I actually feel like I want to write apps for Android again!!!

Which is huge.

But it also makes me think of all the other stuff I only ever learned superficially because I didn't have the patience to learn systematically. How much I don't know because I've tended to "wing it", and guess from first principles and a smattering of ideas.

And I'm probably not the only person in the world like this.

From using Chat to help me understand Android better I suddenly see how this can revolutionise "education".

Because no way am I ever going back to trying to read documents, when I can just ask.

Of course, GPT lies. Even about Android (though probably less there than elsewhere).

LLMs have no distinction between "analytic" and "synthetic" knowledge. (LLMsAnalyticSynthetic)

They can't tell the difference between a useful deduction and bullshitting you by guessing something you'd like to hear.

So it's imperative that we get the world's accurate knowledge into chat form. Because from here on out, I (and I guess many people) am just not going to read books or articles.

Universities and academic publishers have completely fumbled the transition to the internet. That's why the web is full of nonsense and good science is hidden behind paywalls.

Within a year I expect anyone who has a catalogue of writing and knowledge (whether that's publishers, academics, news organizations etc.) to be thinking about how to turn it into a validated chat, by customizing an LLM with it. Otherwise they are going extinct, because no one is going to be reading their work in static form.
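One plausible shape for "customizing an LLM with it" is retrieval-augmented generation: index the catalogue, pull out the passages relevant to each question, and constrain the model to answer only from those passages. Here's a minimal sketch of the retrieval half in Python, using a toy bag-of-words similarity in place of a real embedding model; all the function names here are mine, purely for illustration, not any particular product's API:

```python
from collections import Counter
import math

def tokenize(text):
    # Crude tokenizer: lowercase words with trailing punctuation stripped.
    return [w.strip(".,?!").lower() for w in text.split()]

def cosine(a, b):
    # Cosine similarity between two bag-of-words Counters.
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question, catalogue, k=2):
    # Rank catalogue passages by similarity to the question, return the top k.
    q = Counter(tokenize(question))
    scored = sorted(catalogue,
                    key=lambda p: cosine(q, Counter(tokenize(p))),
                    reverse=True)
    return scored[:k]

def build_prompt(question, catalogue):
    # Instructing the LLM to answer ONLY from the retrieved passages is
    # what makes the chat "validated" by the institution's catalogue.
    context = "\n".join(f"- {p}" for p in retrieve(question, catalogue))
    return f"Answer using only these sources:\n{context}\n\nQuestion: {question}"
```

A real deployment would swap the bag-of-words scoring for learned embeddings and a vector database, but the architecture is the same: the institution controls the catalogue, and the model is steered to answer from it rather than from whatever it absorbed in training.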

This is going to happen very fast. If you are a knowledge generating institution, you don't have 30 years to get it wrong the way you did with the web. (AcademiaVsNewMedia)

You have 3 years, max. Either you make your own chat interface to the knowledge you are willing to stand behind and warrant, so that people can query it as and how they like.

Or you take your chances that Microsoft and Google aren't going to blur it into their generic systems, where it might well get mixed up with a whole lot of nonsense and invention based on the epistemic wasteland of the public web.

Universities are in real trouble for a range of reasons. But now the whole teaching model is going to be broken too. We are all going to want knowledge on tap via a chat UI, not in the form of books or essays, and with few formal lessons. (Though there'll still be a need for guidance in some form.)

Telling students to read static documents won't cut it.

This is existential. Universities need to be getting together and throwing everything into an LLM which is fine-tuned and human-feedbacked, to an "academic standard".

If not, in 10 years, the entire notion of an "academic standard" will be gone.

AcademicPublishers should also be in on this, but I don't expect those parasites to contribute anything.

So it has to be universities. Only universities have the resources and interest in doing this.

And the window for doing it is closing very fast. You've basically got 10 years and then it's over.

LionKimbro then asked:

OK, I'm following what you're saying. I want to add something to the picture... Can't remember where exactly I saw this (an article re: open-source chatbots) but AIs are being trained from other AIs. And it actually works. It works really well. So who gets paid for publishing info?

To which my response is:

Well the entire academic publishing model is broken by commerce. What should happen is academics are paid to generate knowledge. And then publishing is a byproduct of that.

But instead academics are rewarded for publishing, but are not paid much. While journals use their leverage to squeeze as much rent out of the system as possible.

(I'm helping someone with a paper right now and the journal want £2000 to allow it to be OpenAccess🤦‍♂️)

This is a completely fucked up system that needs to be destroyed.

Before it destroys academia.

All the incentives are misaligned. Academics write papers even when they have little to say. OTOH there are no resources or support for writing papers (imagine if universities gave academics a statistics-checking service etc.). Then even when papers are good, they are unnecessarily verbose, because the formats of academic writing haven't evolved for the web age. (GranularityOfScholarlyWriting)

And behind paywalls. As are references needed to understand them.

The result is a deluge of nonsense online for free. While any nuggets of actual good information are hard to produce, and even harder to access.

Imagine if, instead, we had a system designed to help researchers:

  • discover genuine knowledge
  • review and check it
  • "surface it"
  • and get it out to as many interested people as possible.

Imagine that. Because right now we have the opposite of that. Something designed to encourage even good academics to rush out premature and spammy, but long-winded writing. While keeping the highest quality knowledge as obscure and inaccessible as possible.

However, there's now a new factor: LLM chat is the meteor that's coming for this system. Because, based on my experience, I think tolerance for reading texts, as opposed to asking directed questions of a bot, is going to plummet.

And one of two things will happen. Either it will wipe out the academic system altogether, because the universities won't adapt, and they'll die.

Or, and I hope this can happen, the universities do adapt, embrace the change, and reinvent themselves for this paradigm.

What is this paradigm?

It's one where the university prioritises generating good knowledge, and getting it behind a chat interface as quickly as possible. So when people want to know truths about X they can ask the chatbot and get that knowledge in a form that exactly fits, and tracks their trajectory of curiosity.

And this is going to replace most teaching.

So back to your question, who pays, who gets paid?

Universities still have some resources. They're the ones who have to invest in this paradigm and make it work, and the ones who need to figure out how it works financially: focusing the money and resources not on "publication" or the generation of static texts, but on good research results which can be fed into the database that feeds the chat UI.

Put it another way ...

In another, private, online discussion I made similar points :

That particular tweet-thread was saying that Chat was a much nicer UI for me compared to reading static documents. And my experience of it being so much nicer leads me to think that I (and other people) will continue to want this, going forward. I've hated trying to write Android in a project I've been on for the last couple of years. Because

  • a) the documentation is huge, online help is full of outdated and deprecated things, and it's massively complex to try to get my head around. And
  • b) because of all that, I've undoubtedly reinvented the wheel several times: i.e. implemented stuff myself, laboriously, which the platform already provided. But I didn't know the right terminology for it, so didn't know how to search for it.

Last night I learned and understood more in a single chat about a particular topic with GPT3.5 in about 15 minutes, than I'd been able to figure out by myself reading documentation, in the last two years. Now, yes, I'm stupid / lazy / undisciplined. But even so, that is a dramatic difference.

And, of course, I want everything to be like this now. I know it's possible.

So the challenge is, the technology makes it possible. It's obviously "desirable". But how do we make sure that the chat LLMs have good and accurate information rather than bad nonsense? Because we can see that when humanity is forced to choose between easy-to-find-and-assimilate lies and difficult-to-get-at truths, we choose the first. And I'm not exempt from this.

So the institutions whose business is truth, ie. the universities, really need to get on top of this. ASAP. Because if not, it will be a disaster.

... this next is in response to some scepticism about the validity of universities handing down truth ...

As I said in my Tweet thread. The big issue with LLMs is that they have no "analytic / synthetic distinction". They can't tell the difference between a legitimate "deduction" vs making something up that "sounds right".

In the context I've been using them, this often turns up as Chat "hallucinating" libraries that don't exist. It knows that there SHOULD be a library that would be the right solution to my problem. It can deduce what the logical name for such a library would be, and where I'd find it. So that's what it does. It tells me to import the library called X from that location.

The fact that the world doesn't contain such a library is unimportant (or unknown) to it.
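For this particular failure there is at least a cheap mechanical guard: before trusting a suggested import, check it against what the environment can actually resolve. A sketch in Python (the function names are mine, for illustration; only `importlib.util.find_spec` is real standard-library API):

```python
import importlib.util

def module_exists(name):
    # Ask the import machinery whether this module can actually be resolved,
    # without importing (and therefore executing) it.
    try:
        return importlib.util.find_spec(name) is not None
    except (ModuleNotFoundError, ValueError):
        # find_spec raises ModuleNotFoundError when a parent package is
        # missing, and ValueError for some malformed names.
        return False

def vet_suggestions(suggested):
    # Split an LLM's suggested imports into ones that resolve locally
    # and ones that are (in this environment, at least) hallucinations.
    real = [m for m in suggested if module_exists(m)]
    fake = [m for m in suggested if not module_exists(m)]
    return real, fake
```

This only proves a module isn't installed here, not that it doesn't exist anywhere, but it catches the common case of the bot confidently naming a library that was never written.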

Of course, we'll partly solve this by combining LLMs with web-search or other databases. But the LLM still has to figure out which are the situations in which it needs to consult the external world / database vs. the situations in which it can just deduce.

To be fair to the LLM for a minute, this is also something we humans have to do, and sometimes have trouble with. We all have political arguments where we sometimes fill in our blanks by guessing what someone's position is, rather than looking it up. And it's embarrassing to be caught out like that.

But, even if we combine the LLM with a database, we're still then dependent on the accuracy of the database. And if the database is "go and search the open web for something that sounds plausible", then we're in even more trouble, because the web is full of disinformation.

So that's my main point. Yes, defining good vs. bad information is context dependent. And we have a crisis now because what consensus we had about the truth has broken down. (TheEndOfConsensus)

We increasingly all believe different and incompatible things about the world: from the character of politicians to climate science to economic principles to the effectiveness of public health measures.

If anyone has an interest in "alternative facts" about anything, then the web will soon have a lot of passionate advocacy of those positions.

So, my tweet thread is really aimed at universities, and it assumes that universities still have some authority, and a role in preserving our best version of "the truth" about many things.

Obviously if you don't agree with that, then the thread doesn't have much value to you. But assuming we have some idea that universities should have a role in trying to identify true information, and to preserve and disseminate it, my point is they need to adopt LLM chat as their medium for doing that. Because that's the format everyone will be expecting knowledge to be in, in the very near future. And my guess is the "very near future" is far closer than most people expect. I'm thinking less than 5 years.

Which means universities need to start THIS year, and show plausible movement towards having their knowledge in chat form within 3-5 years, if they still expect to be taken seriously in 10 years.

See also: