LLMsAnalyticSynthetic


LLMs have no distinction between "analytic" and "synthetic" knowledge.

That's it. That's the quote.

LanguageModels don't know when they are deducing, making a valid new synthesis of ideas, and when they are a BullshitGenerator, speculating about things that have no empirical validity. In Kantian terms: they can't tell whether a claim is true in virtue of the meanings of the concepts involved, or whether it depends on how the world actually happens to be.

I saw this when ChatGPT suggested I use the library "quil-droid" in my Android app written in ClojureLanguage. quil-droid doesn't exist. But if there were a Clojure library for Android that wrapped the Processing API, it would almost certainly be built on ClojureQuil, and be called something like "quil-droid". The fact that no such library is available in the repositories doesn't stop Chat telling me about it and how to use it.
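For context, here's a minimal sketch of what real ClojureQuil code looks like, using only functions that actually exist in quil.core on the desktop JVM. A hypothetical quil-droid would presumably have offered an API much like this, which is part of what makes the hallucination so plausible:

```clojure
(ns demo.sketch
  (:require [quil.core :as q]))

;; Standard Processing-style lifecycle: setup runs once, draw runs every frame.
(defn setup []
  (q/frame-rate 30)
  (q/background 255))

(defn draw []
  (q/fill 0 120 255)
  ;; Draw a circle that follows the mouse pointer.
  (q/ellipse (q/mouse-x) (q/mouse-y) 20 20))

;; defsketch wires the handlers into a running Processing window.
(q/defsketch demo
  :title "Quil demo"
  :size [300 300]
  :setup setup
  :draw draw)
```

Little of this would need to change on the surface for an Android port, which is exactly why the invented name and usage instructions sounded right.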

When challenged on its non-existence, Chat apologises for the mistake: the real name, it says, is "quil-android". Again, a perfectly plausible deduction of what the name would be if the library existed.

But again, no such thing.