DoesAbstractionScale
ThoughtStorms Wiki
Thinking about Wiki. Particularly WikiAsUltimateUserConfigurableApplication
One of the things that's nice about wiki is that you can build it incrementally. And one of the interesting things about this is the move from ConcreteToAbstract in the CategoryCategory convention. At first you just start adding pages. But at some point you notice that several pages belong to a category. So you invent a category, and simply add a CategoryCategoryName tag at the bottom of the page. This immediately creates a place to describe the category and the possibility of searching for all things in it. We've moved from concrete to abstract very simply. Other systems make you design your abstract categories separately, then fill them in. That's a hierarchical, top-down approach as opposed to wiki's bottom-up, organic one.
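The mechanics are almost trivially simple. A minimal sketch (assuming pages are just named blocks of text, as in a plain-text wiki; the page names and texts here are invented for illustration): a category is nothing but a WikiWord tag on a page, and "the category" is just a search for that tag.

```python
# Sketch of wiki-style bottom-up categorisation.
# Pages are plain text; a category is just a WikiWord tag on the page.
# (Page names and texts below are made-up examples.)

def pages_in_category(pages, category):
    """Return names of all pages that carry the given category tag."""
    return sorted(name for name, text in pages.items() if category in text)

pages = {
    "DoesAbstractionScale": "Thinking about wiki...\nCategoryAbstraction",
    "OnAbstraction": "Notes on abstraction.\nCategoryAbstraction",
    "HolidayPhotos": "Pictures from the beach.",
}

# No up-front schema: tagging a page *is* the act of categorising it.
print(pages_in_category(pages, "CategoryAbstraction"))
# → ['DoesAbstractionScale', 'OnAbstraction']
```

Note there's no category table to design in advance; the abstraction is grown out of the concrete pages after the fact.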
Thinking how easy this is ... and thinking how UsersFindAbstractionHard, I started wondering about the assumptions in computer science.
We understand that abstract thinking is good. If we design something in the abstract, it is more flexible and reusable. And on the small scale we see this all the time.
And yet, software is hard. People like AlanKay (LateBinding) and JaronLanier (PhenotropicProgramming) think that we have failed to build large software projects the way we succeed in large engineering and architectural projects.
So I started wondering. Perhaps this is because scaling up abstract thinking is hard. We try to build something big and complex. And yet we try to make it light enough to float away from its moorings in the world. The web isn't like that. The web, as originally built, is full of hard-coded references to other pages (which break). It's full of information about the real world. Real-world addresses of companies and individuals. (SituatedSoftware)
And then I started thinking about the problems I described in SemanticsOfProgrammingEnvironments and I wondered: what if we've got everything wrong? We think that concrete-based things are harder to scale up, to maintain, even though naive users find them easier to build. And we think this because we have some definite examples where, if we make the right abstractions, build the right future-proofing, certain kinds of adaptation and maintenance are easy. But every time we plan an abstraction layer which makes one kind of adaptation easy, we make another kind of (unimagined, unplanned) adaptation more difficult. (We build an API to abstract away from a database, but perhaps we make our system more dependent on a substrate that can provide that API ... and lose the flexibility to port on top of a hierarchical filing system, or to take advantage of extra services provided by one particular database.)
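That database example can be sketched in code. A hedged illustration (all class and method names here are invented, not from any real library): code written against the abstract store ports trivially between backends, but any backend-specific extra, say one database's full-text search, is unreachable without breaking the abstraction.

```python
# Sketch of the abstraction trade-off described above. Names are hypothetical.
from abc import ABC, abstractmethod

class Store(ABC):
    """The abstraction layer: any backend that can put/get will do."""
    @abstractmethod
    def put(self, key, value): ...
    @abstractmethod
    def get(self, key): ...

class DictStore(Store):
    # Porting to a new substrate is easy: just implement put/get.
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)

class FancyDbStore(Store):
    # Imagine this wraps a database that ALSO offers full-text search.
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)
    def full_text_search(self, term):  # backend-specific extra service
        return [k for k, v in self._data.items() if term in v]

def application(store: Store):
    # Code written against Store is portable across both backends...
    store.put("page", "abstraction does not scale")
    return store.get("page")
    # ...but it cannot call full_text_search() without breaking the
    # abstraction. The flexibility we planned for costs us the
    # flexibility we didn't plan for.

print(application(DictStore()))
print(application(FancyDbStore()))
```

Both calls print the same value, which is exactly the point: the interface makes the backends interchangeable by hiding everything that makes one of them special.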
We're taught to recognise certain opportunities for abstraction. We build catalogues of patterns where we can look up precedent. (SoftwareDevelopmentIsBecomingLikeLaw)
We see the mess that gets made by concrete systems that can't be adapted.
And yet ...
As we build larger and larger systems, we expect to keep the same virtues. But in fact we are faced continuously with the temptation to make quick and dirty hacks. To apply the concrete rather than the abstract. To appeal to services of this operating system. To assume this service sticks at that URI. To assume that the architecture will use this word length or that byte order.
We learn to avoid temptation. To be virtuous. To build another bridge of abstraction over the abyss of the concrete. And so our software becomes heavier and heavier. Each avoidance of temptation adds more complexity, more code. But more code and more complexity bring their own dangers. Our edifice is massive. And unwieldy. Flexible in all the ways predicted. Inflexible in many other ways, because we've committed so much code to doing a certain thing this way.
Architects and builders don't have the luxury of avoiding the concrete. Each building is embedded in the real world. Each is constrained by its site boundaries. By the services which feed it in its environment. And yet buildings get built, on time, to order. And they adapt over time to continue to suit their users.
Our largest-scale engineering projects are successful despite the lack of abstraction or its alleged virtues.
So what if abstraction doesn't scale? What if we make things harder for ourselves by trying to be purer? What if we embraced the temptation of the concrete?
Which brings us back to what we can learn from wiki. If software was like wiki: user-configurable, full of user quirks and concrete assumptions, embedded in its environment, yet fluid enough to change because concrete adaptation was made easy (RefactoringBrowser). Perhaps we could actually build larger systems more easily. Abstract thinking has its place. But maybe we are too seduced by it. And put too much faith in its ability to scale up.
Could this have anything to do with the struggle between centralization and decentralization? Maybe abstract thinking requires more planning and co-ordination at the centre. While concrete thinking, despite its faults, is more easily distributed among a lot of less smart or less co-ordinated builders. (GenerateAndTestInParallel)
SubText is partly motivated by retreating from abstraction towards the more concrete notion of Copy and Paste. In fact a subsection of JonathanEdwards' manifesto (http://alarmingdevelopment.org/index.php?p=5) is "Concrete is better than Abstract"!
Of course, there's nothing new under the sun. Just discovered BigBallOfMud
Counter-args
Parallelism
One difference between traditional software design and either architecture or even the internet is that traditional software has to build systems which efficiently use the scarce, bottleneck resources of the computer processor and memory. Organic, piecemeal development, and a principle of "leave stuff concrete to make abstract later", are based on the assumption that there are resources available to instantiate those doomed rivals or temporary concrete zones.
You need spare capacity / redundancy for this to work. But if you don't have it, then maybe you need design. (GenerateAndTestInParallel)
Technical Debt
Not trying abstraction leads to TechnicalDebt.
Q : How does this square with the Pragmatic Programming discussion here : http://www.artima.com/intv/metadata2.html ?
A : I think that what they say is right and commonsensical to a certain extent. This isn't meant to be an argument against the obvious, uncontroversial wins through abstracting out certain parameters.
But there's a set of assumptions behind the suggestion of moving things to config files :
- that finding data and working with it in a config file is easier than finding it in code
- that if data is in a config file it is more likely to be an example of "Once and Only Once" than data in code
- that changing the data in a config file is quicker because it doesn't involve recompiling
However, there are cases where these assumptions break down. I hate to bring up my bête noire, Struts, yet again, but ... if you stuff a large number of unrelated parameters into a single XML config file, which lives in its own obscure branch of the directory structure, and is going to be edited by hand using a non-XML editor like Emacs, then finding and changing the parameters is not really easier than finding them placed as static variables in the appropriate classes. And if you need to do a server restart to reload the parameter file when a parameter is changed, then this isn't much different from doing a recompile either.
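The restart assumption, at least, is fixable. A sketch (not a recommendation, and every name here is made up) of a config object that re-reads its file whenever the file's modification time changes, so a changed parameter takes effect without restarting anything:

```python
# Sketch: reload a config file when it changes on disk, avoiding the
# server restart complained about above. All names are invented.
import json
import os
import tempfile

class ReloadingConfig:
    def __init__(self, path):
        self.path = path
        self._mtime = None   # mtime of the file as last read
        self._values = {}

    def get(self, key):
        mtime = os.path.getmtime(self.path)
        if mtime != self._mtime:        # file changed: re-read it
            with open(self.path) as f:
                self._values = json.load(f)
            self._mtime = mtime
        return self._values.get(key)

# Usage: write, read, rewrite, read again. No restart needed.
path = os.path.join(tempfile.mkdtemp(), "app.json")
with open(path, "w") as f:
    json.dump({"timeout": 30}, f)
cfg = ReloadingConfig(path)
print(cfg.get("timeout"))               # → 30

with open(path, "w") as f:
    json.dump({"timeout": 60}, f)
# Bump the mtime explicitly so the change is visible even on
# filesystems with coarse timestamp granularity.
os.utime(path, (os.path.getmtime(path) + 1,) * 2)
print(cfg.get("timeout"))               # → 60
```

Of course this only rescues one of the three assumptions; it does nothing about the findability of a parameter buried in an obscure file.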
Debate
: Which brings us back to what we can learn from wiki. If software was like wiki: user-configurable, full of user quirks and concrete assumptions, embedded in its environment, yet fluid enough to change because concrete adaptation was made easy (RefactoringWiki). Perhaps we could actually build larger systems more easily. Abstract thinking has its place. But maybe we are too seduced by it. And put too much faith in its ability to scale up.
This is the thrust behind much of the agile movement in software development. Move design from being a large up-front task to being an incremental one. Develop structures and methodologies to reduce the cost of change, rather than trying to plan ahead in an attempt to avoid change. Being an ExtremeProgramming convert, I find it works very well.
Actually, I'm starting to realize this. I knew they were thinking in the same way. But I used to think I was some kind of weird, crazy extremist. Now I realize they really are saying similar things. (Guess I should have taken the name seriously ;-) – PhilJones
Manageability asks "Is over-abstraction Java's achilles heel?" (http://www.manageability.org/blog/stuff/over-abstraction-java-achilles-heel) and comes up with an answer I didn't expect.
See also :
- primary : OnAbstraction, ProgrammingWithAndInWiki, SituatedSoftware, OnReuse
- tertiary : SelfLanguage, SeedWorks
Backlinks (31 items)
- AbstractionsVsConcretions
- BigBallOfMud
- ConcretePageNames
- ConcreteToAbstract
- CounterThinking
- DocumentsVsObjects
- GranularityMistake
- ImageBasedComputing
- LateBinding
- ObjectOrientedProgramming
- OnAbstraction
- OnReuse
- PainOfProgramming
- PatternLanguageForTheSocialNetwork
- PhenotropicProgramming
- ProblemsWithInheritance
- ProgrammingStuff
- PrototypeBasedLanguages
- SemanticsOfProgrammingEnvironments
- ServiceOrientedArchitecture
- SituatedSoftware
- SoftwareLessonsFromHowBuildingsLearn
- SoftwareStackSoftwareNetwork
- SoftwareSupplyChain
- StartingPoints
- TailWind
- TechnicalDebt
- TheCityAsInformationSystem
- UsersFindAbstractionHard
- WhatsEssentialInWiki
- WorseIsBetter