Complicated for Twitter. Here's what I think in summary.
Computing is about using language to tell computers to do things. Language enables grammatical composition and ever-increasing levels of abstraction and expressivity.
The great mistake and delusion in computer history, of which the desktop metaphor is just one major example, is "direct manipulation".
People seem to love it and always fantasize about more of it.
But with DM you switch from finding ever more elegant ways to TELL the computer to do stuff, to just "doing it yourself".
And once there's a DM metaphor for a task, rather than a linguistic instruction, it gets locked-in and evolution grinds to a halt.
Basically, every time DM arrived in a paradigm of computing, the vocabulary froze at that point, and at that level of abstraction.
- The desktop metaphor for launching applications and WYSIWYG word-processors still look like the 70s.
- File systems look like the Mac finder from the 80s.
- Spreadsheets started as a promising mix of visual and linguistic, but have DEVOLVED into mere grid-drawing GUIs.
- IDEs haven't even changed their menu layouts since the 90s.
All progress in computer science depends on maintaining that gap between you telling the computer to do something, and what the computer does.
All the force multipliers live in that gap. Close it and progress stops.
It's exactly my belief in the "gap" between what you type and what it "means" (or evaluates to) that comes from my experience as a programmer. As I say elsewhere, it's where the magic happens.
If I want to write the list of numbers from 1 to 1000 in a WYSIWYG editor, I have to type them all out. But in a decent SmartAscii that aspires towards being a programming language, I can write something like:
(drop 1 (range 1001))
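That expression reads like Clojure, where `(range 1001)` gives 0 to 1000 and `(drop 1 ...)` removes the leading zero. For comparison, a minimal equivalent in a hypothetical Python-flavoured notation would be:

```python
# Equivalent of (drop 1 (range 1001)): the numbers 1 to 1000 inclusive.
numbers = list(range(1, 1001))
print(numbers[0], numbers[-1], len(numbers))
```

Either way, a short expression stands in for a thousand lines you'd otherwise type by hand.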
I see other ToolsForThought systems trying to square the circle by eliminating the gap. But they simply end up reproducing it at a finer granularity. You see this when people type double square-brackets into the editor and things start popping up, offering autocompletes or other extra meta-data to add.
It's not that there isn't a gap between the "source" and "result" of their text; it's that they try to cram it all into that tiny pop-up moment, where there isn't space for a huge amount of "magic".
The worst case is when, by adding extra information through the GUI at that point, you add information that isn't in the canonical plain-text version of the text. That makes, say, copying and pasting the text elsewhere impossible, or at least laborious.

By accepting the need for a gap between source writing and effect, you allow any text-editing / diffing etc. tool to work on the whole of the information you are working with. You could, for example, use external scripts to automate other processing of your data, knowing that all the relevant information is available. (There wasn't extra information added by a GUI somewhere else.)
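As a sketch of what "external scripts" buys you: because everything lives in the plain text itself, any tool can process a note in full. Here's a hypothetical example extracting every double-square-bracket link from a note (the `[[...]]` syntax and the note text are illustrative assumptions, not a real system's format):

```python
import re

# Because all the information is in the canonical plain text,
# an external script can find every [[WikiLink]] in a note --
# no hidden GUI-added metadata is needed or lost.
note = "See [[LateBinding]] and [[ToolsForThought]] for more."
links = re.findall(r"\[\[(.+?)\]\]", note)
print(links)
```

The same property means diff, grep, version control, and any future tool you haven't thought of yet all work on the complete information.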
And most importantly, it's a kind of LateBinding. The "code" in the text only gets evaluated at render time, as late as possible, rather than being expanded like a macro at an earlier write time.
This gives the maximum power and flexibility and expressivity compared to binding the source-text to its meaning earlier.
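The render-time evaluation idea can be sketched in a few lines. This is only an illustration of the principle, assuming a made-up `{...}` delimiter for embedded expressions and using Python's `eval` purely for brevity:

```python
import re

def render(source: str) -> str:
    """Late binding: embedded expressions stay as source text and are
    only evaluated when the note is rendered, not when it is written.
    (The {...} delimiter and the use of eval are illustrative only.)"""
    return re.sub(r"\{(.+?)\}", lambda m: str(eval(m.group(1))), source)

note = "The first five squares: {[n * n for n in range(1, 6)]}"
print(render(note))
```

The source text keeps the expression itself, so the "meaning" can change with the renderer; a write-time macro expansion would have frozen the result into the text.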
See also :