TomCoates asked a question (http://www.plasticbag.org/archives/2003/03/value_judgements_on_two_kinds_of_networks.shtml) which I've been thinking about for some time.
Surely I believe that networks are most productive with the least central regulation and control.
And, surely, being a socialist implies I believe in government control of the market.
Aren't these two positions incompatible?
Here's the first draft of an answer I posted to Tom's weblog:
On the internet we only care about the overall, statistical properties of the system. If a packet gets lost, no problem: send it again. Packets are dispensable. The global economy is different, because we care that individuals have rights and dignity which we must respect. If a kid goes hungry, that may be tolerable from the overall, statistical perspective. But we have a moral obligation to treat it as a failure of the system.
OK, that's a good answer. People are important. We have a moral duty to people. Central regulation may make the overall network less efficient, but it helps protect individuals who would otherwise be destroyed by capitalism. The people who don't generate enough value to earn money from the network, but who still need to eat.
But it answers the political question by raising moral ones.
Because it accepts that the overall efficiency of the network is compromised. And, in the long run, after we're dead, others may suffer for the accumulated inefficiencies. If we were utilitarians we might feel that this was short-termism: looking after a few people now but hurting untold numbers in the future. (It's a problem made concrete every time a government chooses to subsidize an unprofitable industry to save jobs.)
In other words, a reversal of the folk idea of political positions. The left look after the few and the present, while the right, determined to make the system as efficient as possible in the long run, turn out to be the defenders of the many.
Keynes answered this by pointing out that, in the long run, we're all dead. We have to be concerned with the here and now, rather than with an idealized state at some indefinite point in the future. But how far ahead should we be responsible?
It's another problem of QuantitativeEthics. To what extent can we balance our obligations to the few and the many, the present and the future?
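One way to make this QuantitativeEthics trade-off concrete is with the economists' standard tool: exponential discounting. The sketch below is my own illustration (the benefit size, horizon, and rates are all assumed numbers, not anything from the original discussion); it shows how the answer to "how far ahead are we responsible?" gets smuggled in through the choice of discount rate.

```python
# A minimal sketch of how a discount rate answers "how far ahead
# should we be responsible?". All numbers are illustrative assumptions.

def present_value(benefit, years_ahead, discount_rate):
    """Value today of a benefit received `years_ahead` years from now."""
    return benefit / (1 + discount_rate) ** years_ahead

# The same benefit of 100 units, delivered to a generation 100 years out:
for rate in (0.0, 0.01, 0.05):
    pv = present_value(100, 100, rate)
    print(f"discount rate {rate:.0%}: worth {pv:.2f} today")
```

At a 0% rate every generation counts equally; at 1% the future benefit is already worth only about a third of its face value today; at 5% it is worth less than 1%. The whole argument between "look after the present" and "look after the future" collapses into which rate you pick.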
- Appeal once again to the aliasing effect of discrete reality. Maybe the inefficiency isn't cumulative. We steal efficiency from the system now, but it's only value which would be lost in the rounding errors of the generations anyway.
- response: maybe that's true. Or maybe it's exponential - WingsOfAButterfly, etc. –BillSeitz
- Perhaps we simply owe more to the already existing, here and now, than to the merely potential ... otherwise contraception would be murder, etc. (See more on PotentialPeople)
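The disagreement in the bullets above, whether the inefficiency we "steal" now washes out in rounding errors or compounds exponentially, can be made concrete with a toy model. Everything here (the growth rate, the size of the losses, the horizon) is an assumed illustration, not a claim about real economies.

```python
# Toy model: compare a one-off efficiency loss ("rounding error")
# with a small recurring drag that compounds each generation.
# All parameters are illustrative assumptions.

GROWTH = 1.02        # assumed 2% baseline growth per generation
GENERATIONS = 50

def wealth(one_off_loss=0.0, recurring_drag=0.0):
    w = 1.0 * (1 - one_off_loss)             # take the one-off hit up front
    for _ in range(GENERATIONS):
        w *= GROWTH * (1 - recurring_drag)   # drag compounds every step
    return w

baseline = wealth()
rounding = wealth(one_off_loss=0.05)         # 5% taken once
compounding = wealth(recurring_drag=0.005)   # 0.5% drag, every generation

print(f"baseline:    {baseline:.3f}")
print(f"one-off 5%:  {rounding:.3f}  ({rounding / baseline:.1%} of baseline)")
print(f"0.5% drag:   {compounding:.3f}  ({compounding / baseline:.1%} of baseline)")
```

The one-off 5% loss stays a 5% loss forever, which is the "rounding error" intuition. But a recurring drag a tenth its size compounds to over a 20% shortfall by generation 50. So whether redistribution now is cheap or catastrophic for the future depends entirely on whether its cost is one-off or recurring.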
Alternatively, the left must argue the case that centralization increases efficiency. But then the same argument can be made for centralizing control of the network: for rational design rather than organic growth.
I'm now looking into this in the Optimaes project.
The worst scenario
The worst possible case is if both the left and the right are correct: the left that capitalism is corrosive and exploitative and consumes many of us in the present; the right that this system is nevertheless necessary to leave a better world to future generations. In that case we'd know that we owe our own fortune to generations who were consumed before us, and that we're obliged to sacrifice ourselves in the same way.
Organic growth contra rational design is a FalseDichotomy.
We need to design things to grow. We need to design rules that let the system grow organically. Internet email and Instant Messaging are an example of why. Email works perfectly (if we forget about spam, but IM is no different there), while IM is divided between AOL, MSN, etc. The reason is that email was designed by an open, democratic process, while IM was left to a power play between corporations.
We see exactly the same thing in software. There are methodologies where everything must be designed up front, and methodologies where things are only added to the system when there is a need. But why not do both?
BillSeitz:ScaleVsConsolidation raises some interesting thoughts. My response:
I think the really interesting question here is why some institutions scale and others don't and what the ones that don't can learn from the ones that do. For example, Bill complains about corporations / government / religions etc. but what about markets? Aren't these big institutions? If markets scale in a way governments don't, why is that? Is it something to do with less "dependency" between the parts (cleaner module separation / abstraction?)? Is it that both sides are free(ish) to walk away from a deal, whereas in a hierarchy subordinates can't walk away from superiors, even when they're getting bad instructions?
BillSeitz reply: while I understand your point that MarketsAreEmbedded, I'm not sure that makes a market an institution. Or, perhaps a single market (e.g. the NYSE) is an institution which can be managed in such a way as to introduce MarketDistortion-s, but the MarketEconomy as a whole is not an institution in the sense that it is CentrallyManaged (hmm, is that the definition of an Institution?).
See also:
- Have the right won TheEfficiencyQuestion?
- There still needs to be a DecentralizedLeft
- These comments to DanielDavies at http://pro.enetation.co.uk/comments.php?user=dsquared&commentid=81786856&usersite=http://d-squareddigest.blogspot.com/20020915d-squareddigestarchive.html#81786856 for a discussion of moral obligations and discounting the future.
- LawrenceLessig on "burdening" the future: http://www.wired.com/wired/archive/12.10/view.html?pg=5