GameTheoryAndTheSocialContract

ThoughtStorms Wiki

Context : GameTheory

I started reading this mammoth book by KenBinmore over the Christmas holiday. It is in two volumes, and I've only read the first one so far. (I'm waiting for the other one to turn up through interlibrary loan.) Also, I've followed the author's own advice and merely skimmed through most of it, so there's quite a lot I want to go back and read in greater depth.

All the same, I've no hesitation in exhorting Phil and anyone else here to go out and read it. Whether or not Binmore's theory convinces, he covers so much material and provokes so many interesting thoughts that it's worthwhile.

Here are some thoughts on what I've read so far, which I'll update as I read more and understand it better.

DariusSokolov

The Game of life

This is his general term for the ‘game’ humans play in social interactions.

Binmore does not formally set out the game of life as in a game theory textbook. The game of life is far too complex - it is not a ‘toy game’ like the prisoners’ dilemma.

The game of life does have rules, solutions and all the usual trappings of game theory. Binmore does not specify them in detail, but he makes some general observations:

The players are human beings who can be modelled as specimens of homo economicus.

The rules of the game are ‘unalterable physical and psychological constraints’. They are ‘natural laws’.

Equilibria in the game of life are called ‘social contracts.’

There are multiple equilibria. One way of selecting an equilibrium is for players to bargain and reach the Nash Bargaining Solution. But Binmore doesn’t think this is what humans generally do. The usual way to select an equilibrium is by following additional rules - conventions.

Some of these conventions are to do with what we think of as ‘fairness’.

Game of morals

Another way of selecting an equilibrium in the game of life would be to (imagine that we) play an additional game called the game of morals.

The rules of the game of morals are not natural laws. They have been invented by Ken Binmore, standing on the shoulders of Rawls and Harsanyi.

An equilibrium in the game of morals is a social contract that players would negotiate behind the ‘veil of ignorance’, where players are ignorant of their positions in society.

In Binmore’s theory, unlike Rawls’, players can go back behind the veil of ignorance and renegotiate at any point – whenever any one player feels unjustly treated. The ‘original position’ for negotiations is simply ‘the status quo’ at any given point.

Binmore defines a ‘fair social contract’ as an equilibrium in the game of morals. This means: no player has an incentive to go back behind the veil of ignorance and renegotiate.

An equilibrium in the game of morals must also be an equilibrium in the game of life. The set of equilibria of the game of morals is a subset of the set of equilibria of the game of life.

(Does the game of morals uniquely select an equilibrium?)

Naturalism

Binmore asks: ‘why [should] attention be paid to my game of morals rather than to one of the many other moral systems that have been proposed’?

He believes that the conventions human beings use to select fair outcomes in the game of life ‘already incorporate simple versions’ of the device of playing the game of morals.

Conventions are themselves products of games of life. Existing fairness conventions are themselves part of our ‘current social contract’.

Shifting to a new social contract in which we used the device of the game of morals instead of our inherited conventions would be a feasible equilibrium of our game of life. And we would all prefer it to the present arrangement, says Binmore.

Some other moral systems, particularly those Binmore labels ‘Kantian’, tell us to select social contracts which are not feasible according to the rules of the game of life. There is no point in advocating such moral systems, as players in the game of life will never adopt them.

Might there also be moral systems which are feasible but are not in tune with existing fairness conventions? Binmore doesn’t mention this possibility explicitly.

But he certainly thinks that it is an advantage of his proposal that it is close to existing conventions – this will make it easier to convince people to adopt it.

‘… the tide of history has washed this social tool up onto our beach. Why not therefore use it to improve our lives …’?

Social Contracts

Binmore is using the term ‘social contract’ a lot more loosely than many ‘social contract theorists’. In Binmore’s usage, no actual contract ever needs to be signed, and his theory doesn’t rest on the idea that we are bound by any contractual agreement.

Rawls De-Kanted

Binmore’s game of morals is based on Rawls’ story of the original position, with important changes:

1. Original Position = Status Quo

In old-style ‘Hobbesian’ social contract theories, we start with a ‘state of nature’. Players then have to negotiate a social contract that takes them out of the state of nature and into civilisation. As many have pointed out, including Hume, this ‘state of nature’ is no more than a ‘philosophical fiction’.

Instead of a ‘state of nature’, Rawls has the ‘original position’. The original position has no pretence to being anything but a philosophical device. Rawls views it as ‘a procedural interpretation of Kant’s conception of autonomy and the categorical imperative.’

In Rawls’ story, we imagine we can go behind a ‘veil of ignorance’ so that we have no idea of our positions in society. We can then suppose that social advantages will be allocated in some kind of lottery. Whilst in ignorance, we can negotiate a social contract that will be in force once the lottery is drawn.

Rawls’ original position is ahistorical. When we go behind the veil of ignorance we don’t just forget our own positions within society. We also forget all about the existing structure of our society, its conventions, laws, institutions etc. The new social contract we come out with will start from scratch.

This is one heavily criticised aspect of Rawls’ theory, particularly as the type of social contract that Rawls in fact recommends is an improved version of a twentieth-century liberal democracy.

Binmore’s favoured social contracts would also look like bourgeois liberal democracies. He can be unapologetic about this, however, as his move is to simply identify original position = the status quo. In this version of the veil of ignorance we strip off our knowledge of our individual roles in society, but not of the social structure as a whole.

Rawls’ and Binmore’s models are set up to answer two different questions.

Rawls: what is the ideal just society?

Binmore: what improvements (reforms) are possible to the existing society?

2. No non-credible commitments

Rawls’ atemporal social contract cannot be renegotiated. Players make commitments to abide by the agreed social contract once they have cast off the veil of ignorance and gone out into the real world.

This leads to maybe the biggest problem with Rawls’ theory. What happens if players find it is not in their interests to stick to the agreed contract once the lottery is drawn?

In Binmore’s model a fair social contract must be a feasible outcome of the game of life. That means no player will have an incentive to deviate from the contract agreed in the game of morals.

In Binmore’s model the game of morals can be constantly replayed. If, as the game is played, circumstances arise to make the existing social contract unfeasible, then the game of morals can be replayed.

Binmore devotes a lot of space to refuting arguments that an unfeasible social contract can be maintained through commitments (constrained maximisation), or the ‘symmetry fallacy’.

3. Expected Utility Maximisation

Like John Harsanyi, Binmore drops Rawls’ adherence to the maximin criterion in favour of the ‘orthodox’ use of the von Neumann-Morgenstern (VNM) theory of expected utility maximisation.

In section 4.6 he takes to task Rawls’ three justifications for using the maximin principle.

The first reason, that ‘the alternatives rejected by the maximin criterion are “outcomes that one can hardly accept”’, Binmore thinks is ‘almost frivolous’.

The second argument is, as he reads it, that players in the original position are highly risk-averse. But we do not need maximin to capture this, as risk-aversion is adequately modelled within VNM utility functions.
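To see this second point in miniature, here's a toy Python sketch (the lotteries and the square-root utility are my own hypothetical choices, not Binmore's): a sufficiently concave VNM utility function rejects the risky option just as maximin does.

# Hypothetical sketch: a concave VNM utility function already encodes
# risk aversion, so maximin is not needed to explain cautious choices.
import math

def u(wealth):
    # Concave, hence risk-averse, VNM utility over wealth.
    return math.sqrt(wealth)

# Two invented lotteries over social positions: (probability, wealth) pairs.
safe = [(0.5, 40), (0.5, 60)]
risky = [(0.5, 1), (0.5, 100)]   # higher expected wealth, far more spread

def expected_utility(lottery):
    return sum(p * u(w) for p, w in lottery)

def maximin(lottery):
    # Rawls' criterion: judge a lottery by its worst outcome.
    return min(u(w) for p, w in lottery)

for name, lottery in [("safe", safe), ("risky", risky)]:
    print(name, round(expected_utility(lottery), 2), round(maximin(lottery), 2))

# Both criteria pick 'safe' here: concavity of u reproduces the cautious
# choice that the maximin principle was meant to secure.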

The third argument is that in the original position we have so few data that we cannot work out the subjective probability distributions needed to do expected utility maximisation. Binmore argues that this would not be the case behind his ‘thinner veil of ignorance’ where players retain their cultural and social knowledge.

However, Binmore says dropping maximin will not lead him to a ‘utilitarian’ social contract (some variant of the ‘greatest happiness of the greatest number’ ideal that sums individuals’ utilities without worrying about distribution). In fact his modelling of the bargaining in the game of morals will lead to a more egalitarian outcome similar to Rawls’.

The details of this are left to volume two.

4. Interpersonal utility comparisons

I know I will end up in one of the possible outcome positions allowable under the agreed social contract. I just don’t know which one. Fairness in a social contract must involve some kind of comparison of the preferability of these different possible outcomes.

For example, if our notion of fairness has something to do with a notion of equity, then we will ask how equal the possible outcomes are.

Rawls’ maximin criterion is not immune from the need for a standard to compare different outcomes.

Rawls argues that there are certain ‘primary goods’ including: ‘the powers and prerogatives of office’; ‘the social basis of self-respect’; ‘income and wealth’. All aspects of a person’s outcome in the social lottery can be reduced to an index of these primary goods.

Primary goods are supposed to be directly comparable. This is another hole in Rawls’ theory. It is questionable whether even the ‘most mundane of the primary goods, namely income and wealth’ can be treated in this way. In the present society, there are markets for most economic commodities. In the absence of market failures, the prices of tradeable commodities could thus be reduced to a numeraire price for one ‘primary good’, e.g. US dollars. But markets are social institutions. We cannot import market mechanisms a priori into the original position.

Siding with John Harsanyi once again, Binmore returns to decision-theoretic orthodoxy and compares players’ utilities.

A good part of vol 1 is devoted to explaining orthodox utility theory and its foundations in the theory of revealed preference. Binmore is at pains to emphasise that modern utility theorists do not believe in a primary good measured in ‘hedons’ that lies behind utility functions. Utility functions do not cause behaviour, they are simply devices for representing the behaviour of entities who display consistent preferences.

The problem is that orthodox utility theory has no mechanism for comparing individuals’ utilities. Orthodox theory does allow us to assign cardinal values to the utilities individuals ‘get’ from different outcomes. But these ‘utils’ are purely nominal values. They represent levels of intensity of an individual’s preference for different outcomes. But the scale used to measure intensity is not fixed, just as we can switch between Fahrenheit and Celsius scales to measure temperature. There is therefore no sense in comparing arbitrary values of the utility functions of different individuals.
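Here's a small Python sketch of the temperature analogy (the numbers are invented): any positive affine rescaling of a utility function represents exactly the same preferences, so raw 'utils' carry no interpersonal meaning.

# Hypothetical sketch: VNM utilities are defined only up to a positive
# affine transformation, like Celsius versus Fahrenheit.

utils = {"X": 0.0, "Y": 10.0, "Z": 25.0}   # one person's utils, one scale

def rescale(utils, a, b):
    # For any a > 0 this is an equally valid representation of the same
    # preferences (compare: fahrenheit = 1.8 * celsius + 32).
    return {o: a * v + b for o, v in utils.items()}

other_scale = rescale(utils, 1.8, 32.0)

# Both scales induce the same ranking over outcomes (and over lotteries),
# so behaviour can never pin down which scale is 'the' utility.
assert sorted(utils, key=utils.get) == sorted(other_scale, key=other_scale.get)

# Comparing my 10 utils with your 10 utils is therefore like comparing
# 10 degrees Celsius with 10 degrees Fahrenheit.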

Binmore’s proposed solution is a theory of 'empathetic preferences'.

Empathetic preferences

This is where it gets interesting.

Binmore distinguishes empathy from sympathy, clarifying the language of Hume and Smith.

In sympathy, I ‘identify so strongly’ with another that I take her interests as my own. This can be modelled quite simply by incorporating ‘altruistic preferences’ in my utility function. I derive utility not only from my own consumption but also from that of others.
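A toy Python sketch of how such a utility function might look (the additive form and the weight are my own illustration, not Binmore's formula):

# Hypothetical sketch: sympathy folded into a personal utility function
# as a weight on the other person's consumption.

def sympathetic_utility(my_consumption, her_consumption, altruism=0.5):
    # altruism = 0 is pure self-interest; altruism = 1 would weight her
    # consumption equally with mine.
    return my_consumption + altruism * her_consumption

print(sympathetic_utility(10, 4))   # 12.0: her consumption raises my utility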

Binmore thinks that sympathetic preferences are common within families and other small close groups. But they are not useful in modelling the large-scale social interactions he is talking about where ‘most people are strangers to each other.’

Empathy means ‘putting yourself into the shoes’ of another, assessing the game options from their position. A good loan shark should practice empathy ‘with a view to predicting how [victims] will respond to his overtures.’ More generally, players in games use empathy to model other players’ preferences and thus identify their best moves.

‘Without it, we would be unable to find our way to equilibria in the games we play except by slow and clumsy trial-and-error methods.’

Unlike sympathy, empathy does not affect a player’s own ‘personal’ utility function. Binmore models empathy by supposing that when we put ourselves in another’s shoes, we can imagine ourselves making choices with a distinct set of ‘empathetic’ preferences. These preferences are assumed to satisfy the same consistency requirements as my personal preferences. I thus have a personal VNM utility function when making my own choices, and a separate empathetic VNM utility function I use when imagining myself as someone else.

In Binmore’s model of the game of life there are just two players, Adam and Eve. There is a set C of possible lottery outcomes and the set {A,E} consisting of Adam and Eve.

Then

C x {A,E}

is the set of all pairs (c,j) with c in C and j in {A,E}.

Each player i has a personal utility function ui (c) and an empathetic utility function

vi (c, j).

My empathetic function allows me to compare preferences across personas. That is, I can make comparisons like:

vA (X,A) > vA (Y,E).

This says: Adam would rather be Adam with outcome X than Eve with outcome Y.

In the original position I don’t know if I will be Adam or Eve. Say I have to assess a social contract which gives me an outcome X in either role; I can assign a probability of one half to being either Adam or Eve. My expected utility is then:

Wi (X) = ½ (vi (X,A) + vi (X,E) )
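A minimal Python sketch of this calculation (the empathetic utility values are invented):

# Hypothetical sketch: player i's expected utility for a contract behind
# the veil, where being Adam or Eve is a fair coin toss.

def W(v_i, X):
    # v_i maps an (outcome, role) pair to player i's empathetic utility.
    return 0.5 * (v_i(X, "A") + v_i(X, "E"))

def v_adam(outcome, role):
    # Invented empathetic preferences for one player.
    table = {("X", "A"): 8.0, ("X", "E"): 4.0,
             ("Y", "A"): 6.0, ("Y", "E"): 5.0}
    return table[(outcome, role)]

print(W(v_adam, "X"))   # 6.0
print(W(v_adam, "Y"))   # 5.5, so this player prefers contract X behind the veil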

With a few further assumptions, Binmore is able to derive a constant exchange rate

Ui/Vi

for comparing Adam’s and Eve’s utilities. That is

Vi uA = Ui uE

or

uE = (Vi/Ui) uA.
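A tiny worked example (the constants Ui and Vi are invented): with Ui = 2 and Vi = 1, player i treats ten of Adam's utils as worth five of Eve's.

# Hypothetical sketch: converting between Adam's and Eve's util scales
# at player i's constant rate.
U_i, V_i = 2.0, 1.0   # invented calibration constants

def adam_to_eve(u_A):
    # u_E = (V_i / U_i) * u_A
    return (V_i / U_i) * u_A

print(adam_to_eve(10.0))   # 5.0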

I need to spend some more time studying these assumptions.

The key assumption in this seems to amount to my being able to empathise perfectly with other players - but maybe I've missed something.

Empathy Equilibrium

The problem remains, however: the key ratio Vi/Ui is idiosyncratic to player i. I can decide how I would swap utils with other players. But the rate at which I will swap may not be the same as the rate another player will use.

In order for the game of morals to work we all need to have the same utility exchange rate

V/U = Vi/Ui = Vj/Uj.

Binmore agrees with Harsanyi that ‘In actuality, interpersonal utility comparisons between persons of similar cultural background, social status, and personality are likely to show a high degree of interobserver validity.’

In volume two Binmore will go further and argue that social evolution will lead to an ‘empathy equilibrium’ in which we will all have the same empathetic preferences, and thus a common rate V/U.

In volume one we get just a brief overview of his ideas on how preferences – both personal and empathetic – evolve.

Evolving Preferences

Binmore’s broad observations in volume one borrow from the theory of the firm. Economists talk of three time spans, defined by how firms can alter their production costs. In the short run, costs cannot be altered at all. In the medium run, firms can alter variable but not fixed costs. In the long run they can alter all costs.

In the short run, players’ preferences are fixed. In the medium run, empathetic preferences can change. In the long run personal and empathetic preferences can change.

The game of life and the game of morals are played in the short run. In the medium run, ‘the forces of social evolution, which I see operating primarily through imitation and education’ can shift empathetic preferences. As players repeatedly play games of morals, and watch each other playing them, they learn/imitate ‘behaviour patterns’ or ‘memes’. These memes are the bases of what we describe as empathetic preferences. The memes that survive result in empathy equilibria.

In an empathy equilibrium, no player in the game of morals would try to deceive the others into believing he has different empathetic preferences.

Binmore makes an important point here that I don't quite follow yet.

If players picked an equilibrium in the game of life by doing pure self-interested bargaining rather than following conventions or playing the game of morals, then the equilibrium would be the Nash Bargaining Solution. This will be shown in vol 2. The Nash Bargaining Solution is the maximum point of the function:

wN (x,q) = (xA – qA) (xE – qE)

Here xi is the utility outcome for i after the deal is struck; qi is i’s utility in the status quo, or what he will keep if no deal can be made. The Nash bargaining solution does not require interpersonal utility comparison: it is the outcome of purely self-interested bargaining.
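Here's a small Python sketch locating the Nash bargaining solution on a toy feasible frontier (the frontier and the status quo point are my own inventions):

# Hypothetical sketch: maximise the Nash product (xA - qA)(xE - qE)
# over the invented linear frontier xA + xE = 10.

q = (1.0, 2.0)   # invented status quo utilities (qA, qE)

def nash_product(x, q):
    return (x[0] - q[0]) * (x[1] - q[1])

candidates = [(i / 100.0, 10.0 - i / 100.0) for i in range(1001)]
feasible = [x for x in candidates if x[0] >= q[0] and x[1] >= q[1]]
best = max(feasible, key=lambda x: nash_product(x, q))

print(best)   # (4.5, 5.5): the surplus of 7 above q is split equally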

When players play the game of morals, it will be argued in vol 2, they also arrive at the Nash bargaining solution - but with empathetic instead of personal preferences.

In an empathy equilibrium we have a common interpersonal comparison rate, which allows us to rewrite empathetic utilities in terms of personal ones. In an empathy equilibrium that is ‘adapted’ to the game played over (X,q), the ratio V/U gives a solution to the game of morals that is the same as the non-empathetic Nash Bargaining Solution. I think I get the idea behind this, but I couldn't prove it to you.
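One ingredient I can at least illustrate (my own toy demonstration, not Binmore's proof): Nash-product maximisation picks the same outcome whichever positive constant rescales a player's utility scale, so applying a common exchange rate V/U cannot move the solution.

# Hypothetical sketch: the outcome chosen by Nash-product maximisation is
# unchanged when one player's utils are rescaled by a positive constant.

q = (1.0, 2.0)
candidates = [(i / 100.0, 10.0 - i / 100.0) for i in range(1001)]
feasible = [x for x in candidates if x[0] >= q[0] and x[1] >= q[1]]

def solve(scale_E):
    # Rescale Eve's utils (outcome and status quo alike) by scale_E.
    return max(feasible,
               key=lambda x: (x[0] - q[0]) * scale_E * (x[1] - q[1]))

assert solve(1.0) == solve(3.0)   # same outcome under either scale
print(solve(1.0))                 # (4.5, 5.5)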

As Binmore says, ‘the operation of social evolution in the medium run therefore leaches out all the moral content built into the game of morals.’

However, this is not the situation we are usually in. Usually our empathetic preferences are behind the times (i.e., out of equilibrium?). We are meant to think of a new social contract negotiation occurring when a ‘new technological opportunity arises’.

So the story is something like: a new situation arises, e.g. a technological advance, that requires a renegotiation of the social contract. But our 'behaviour patterns' aren't adapted (I'm not really sure what that means) to the new situation. So if we play the game of morals, we do so with antiquated empathetic preferences, which leads to a 'moral' solution rather than rational self-interested bargaining. Morality is a symptom of sluggish social evolution.

As for what Binmore says about the long run, I'll leave making any comment at all until I've read vol 2.

Whiggery

Binmore describes himself as a 'bourgeois liberal' and a 'reformist'. His theory is used to support a gradualist position in which 'The Left' is blamed for proposing non-feasible social contracts. And when human beings chase after the impossible, as you can imagine, terrible things happen.

‘The problem for a reformist [is] that of seeking a new social contract to which society can be shifted by mutual consent.’

I will continue this discussion here: OnWhiggery.

DariusSokolov

This is fantastic stuff Darius. Too bad interlibrary loan doesn't reach out here :-( More thoughts as I digest it (particularly the maths)

PhilJones

It would be great if you could get hold of a copy Phil as I would like to have a good discussion with you about this. And I wouldn't want you to take my regurgitation for Binmore himself - I have much more digesting to do myself.

Another researcher working in this field is Bettina Rokenbach - I've just started reading her papers and it is very interesting stuff. – ZbigniewLukasiak

Cool, thanks – PhilJones