So this is in retrospect one of the most insightful and prescient jokes made on Twitter in recent years. (Not sure if this is the original instance of it or not)
Certainly the best joke about LanguageModels in 2022
Look. I am a Wordcel and I am not at all happy about the derogatory nature of the ShapeRotatorAndWordcel discourse.
Nor do I accept that logic and rigour can't be packed into text and symbol manipulation. (That's what maths notation is). Or that, ultimately, the geometrical thinking involved in shape-rotation can be anything like as powerfully expressive as text. I am TextPeople through and through. And it's very clear that pictures can bullshit us as badly as words. That's what OpticalIllusions and GraphicDesign are about, after all.
However. As the BullshitGenerator nature of LanguageModels becomes more apparent (eg see ChatGPTOnThoughtStorms), we need to take a step back and ask, if there IS to be a principled distinction between "bullshit" of the kind that a language model can spit out, and real "verbal reasoning", how can we characterize / measure it?
How do we know when verbal reasoning is spiralling off into bullshit vs. doing valid work?
More on LLMsAnalyticSynthetic
In one sense, TheWorldBelongsToStoryTellers speaks to the power that words still have.