Over on http://hanson.gmu.edu/wildideas.html RobinHanson says something I profoundly disagree with.

If even a few of us honestly sought truth, we would not disagree with each other.

On matters of fact or morality, honest rational truth-seekers cannot agree to disagree. Even if highly computationally constrained, they should not be able to anticipate the direction of others' opinions relative to their own. Yet virtually no pair of humans is like this. Thus virtually no humans are truth-seekers, and since most humans think they are truth-seekers, they are self-deceived.

(Full paper : http://hanson.gmu.edu/deceive.pdf)

I'd say that, far from this being the case, the data always underdetermines the theory. There's always room for people who are honest, good-willed, and smart to disagree when presented with the same facts, because coming to general rules, principles and models from the available evidence is a matter of conjecture. And ConjectureIsBlind in the sense that there's no right way to do it.

Hanson's position, on the other hand, depends on the assumption that conjecture is some kind of Bayesian induction. On this model, disagreement based on the same data can only be due to differences in the priors. Hence Hanson makes this slide : Are typical human disagreements rational? Unfortunately, to answer this question, we'd have to settle this controversial question of which prior differences are rational. So in this paper we consider an easier question : are typical human disagreements honest?
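To make that model concrete, here's a toy sketch of my own (not from Hanson's paper) : two agents apply identical Bayesian updating to identical data, yet end up with different posteriors purely because they started from different priors. The two-hypothesis coin model and the names John and Mary are illustrative assumptions.

```python
def posterior_biased(prior_biased, heads, tails):
    """Posterior probability that a coin is biased, in a toy model with
    two hypotheses:
      H1 (biased): P(heads) = 0.8
      H2 (fair):   P(heads) = 0.5
    prior_biased is the agent's prior probability of H1."""
    like_biased = (0.8 ** heads) * (0.2 ** tails)
    like_fair = (0.5 ** heads) * (0.5 ** tails)
    numerator = prior_biased * like_biased
    return numerator / (numerator + (1 - prior_biased) * like_fair)

# The *same* evidence for both agents: six heads, four tails.
data = (6, 4)

john = posterior_biased(0.9, *data)  # John's prior strongly favours "biased"
mary = posterior_biased(0.1, *data)  # Mary's prior strongly favours "fair"

print(round(john, 3))  # ~0.794
print(round(mary, 3))  # ~0.046
```

Same likelihoods, same arithmetic, yet John remains nearly convinced the coin is biased while Mary remains nearly convinced it's fair — which is exactly why Hanson's question reduces to whether the prior differences themselves are rational.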

That's a hell of a shift. Is "honesty" easier to pin down than "rationality"? Hanson defines "dishonesty" as violating the rationality standards one holds up for others. Put that way, it's a pretty easy target, because no-one except the CriticalRationalist has a viable notion of rationality which isn't self-inconsistent. But this is very far from the everyday notion of "dishonesty", which seems to imply deliberate malice, a desire to mislead, or other morally objectionable qualities.

In fact Hanson's position is likely to take you in a moralizing direction. Start with the idea that people disagree because they're dishonest, and you'll pretty soon be accusing everyone who disagrees with you of being dishonest. However he couches it, it's a recipe for deterioration into ad hominem attack.

Nevertheless, an interesting paper.

I wonder how close this is to the kinds of arguments that a market will always converge? This fits general discussions about AgoricSystems etc. But in reality do markets ever converge? Because they aren't closed systems, but are always open to and being perturbed by new information, new technologies, new resources, political events, earthquakes, monsoons and other disasters, mass hysteria, finals of the world cup, flu and psychological events. So they may be processing and aggregating the information, but they never get a chance to hit any stable PointAttractors.

In the same way, Hanson's is a proof that given a set of clearly defined factoids and probabilities, we might get convergence through discussion and negotiation. But he airily dismisses the idea that this system is open when he says : Thus John and Mary need not be absolutely sure that they are both honest, that they heard each other correctly, or that they interpret language the same way. (My emphasis on language)

Is Hanson sliding over a major question when he dismisses the language problem? If John thinks that "Ford" is the right name for a Chevy, then it's not clear that they'll ever converge unless they figure out that this is the mistake being made. Unrealistic example? If you want to take these results to a political domain, you are absolutely going to encounter arguments between people who have subtly different, fiendishly complex understandings of words like "rights", "justice", "freedom", "individual", "person", "decision" etc.

What kind of claim is Hanson's? Is it an empirical claim, that people are or approximate Bayesian inferers? If so, and if they don't give up disagreement, then isn't that claim just wrong?

Or is it a normative claim that people a) should try to act like Bayesian inferers and b) in order to do so, they need to cure themselves of this smug agreement to disagree?

In fact, it's the confusion over this question that leads to the touchingly plaintive cri de cœur as Hanson tries to understand why people don't accept that they shouldn't agree to disagree :

Another possibility is that most people simply do not understand the theory of disagreement. The arguments summarized above are complex in various ways, after all, and recently elaborated. If this is the problem, then just spreading the word about the theory of disagreement should eliminate most disagreement. One would then predict a radical change in the character of human discourse in the coming decades.

Happiness! But sadly ...

The reactions so far of people who have learned about the theory of disagreement, however, do not lend much support to this scenario.

Refusing to address the possibility that people continue to disagree because they aren't Bayesian, and unable to tackle the issue of whether their priors are rational (the "mad" option), Hanson ends by plumping for the "bad" option. People refuse to agree because they're dishonest.

See also :

  • ValueOfArgument (and as noted in that discussion, I do agree with the spirit of the Hanson paper, that we shouldn't be satisfied with an agreement to disagree. There is potential energy there for learning more, and if we're smart, we'll find a way to tap it. But agreement has an equivalent problem.)