(ReadWith) CollectiveIntelligence

Here's the beginnings of a book or extended essay I began about a year ago. I'm still thinking of writing something bigger based on this, but I'm not sure about the national stereotypes. I think I need better labels.

One potential confusion with the term "collective intelligence" is that there are two distinct groups of people who use it to mean two different (if related) projects. It may give a useful general impression if we label these the "American school" and the "French school", as long as we don't take these regrettable national stereotypes too literally or seriously.{{EricBonabeau, for example, is a representative of what I'm calling the "American" school :-)}} Both are concerned with the intelligence of collectives: how groups know and act intelligently. But the emphases are different.

The American school is particularly excited by how large swarms of fairly dumb individuals can be organized to work together to produce intelligent behaviour. The paradigm example is the ant colony, and the key insight is that the individual units need not be very sophisticated in their own right. They do not need much information about the goals or state of the whole; they can get by with minimal, locally available information. The intelligence of the system is in the links between units, not the units themselves; more precisely, in the distribution and organization of the links. The units don't necessarily need to manage complex interactions with each other; simple communication protocols are sufficient. Smart behaviour arises through large-scale statistical effects, competition between sub-groups and the magic of "emergence".
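One of those "large-scale statistical effects" can be illustrated with a toy simulation (my own sketch, with made-up numbers, not drawn from any particular C.I. literature): suppose each unit guesses a binary fact correctly only 60% of the time. A lone unit is barely better than a coin toss, yet a simple majority vote over a thousand such units is almost always right.

```python
import random

def majority_accuracy(n_units=1001, p_correct=0.6, trials=200, seed=42):
    """Fraction of trials in which a simple majority of noisy units
    guesses a binary fact correctly."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        # Each unit independently votes; it is right with probability p_correct.
        correct_votes = sum(rng.random() < p_correct for _ in range(n_units))
        if correct_votes > n_units / 2:
            wins += 1
    return wins / trials

print(majority_accuracy(n_units=1, trials=2000))   # a lone unit: roughly 0.6
print(majority_accuracy(n_units=1001, trials=200)) # the swarm: close to 1.0
```

No unit here knows anything about the whole; the collective accuracy comes entirely from aggregating many weak, independent signals.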

It's easy to see how colonies of termites, networks of dumb routers, "smart-mobs" of bloggers, markets of buyers and sellers interested only in price etc. fit this model. We can also see why such systems "scale up". Individual units are cheap to mass-produce (ants, routers) or need only minimal training (bloggers, buyers and sellers). And because the communication protocol is also simple, connecting new units is fairly straightforward.

Such a notion of collective intelligence fits well with the industrial-age, mass-production ideal of improving productivity through division and deskilling of labour. It starts with a sense of wonder at how simple and stupid the individual units can be while still producing intelligent behaviour. And the project is to understand how this happens, with the eventual aim of trying to make better systems. The "American" C.I. knows how to scale, but wants to be even more intelligent (flexible, productive).

The "French school" is, in contrast, determinedly humanistic. It takes as given the virtues of the experience of being an intelligent individual, richly engaged in solving a task, either alone, or in a small, close-knit group (football team, hunting party, jazz band) and notes that the current attempts to scale-up to larger organizational structures have tended to diminish the experience. Collective organization in the large involves oppressive hierarchies, alienating markets and corrosive politicking. Such large scale institutions are not only less pleasant, but a good deal less smart than the individual human or the small groups she engages with.

Hierarchical organizations (sometimes known in this literature as "Pyramid Intelligence") are notoriously bad information processors. At best, the top of the pyramid is a bottleneck which can't cope with the flood of incoming information. At worst, no one admits mistakes, bad news never flows upwards, and the top is effectively issuing orders blindly. Markets are good at distributing and tracking certain kinds of information, but only that which is already pre-organized using the conceptual tools of property and prices. Information which isn't organized like this is hard for markets to explicitly represent and therefore process. And information which can't be packaged in this form (moral or aesthetic knowledge, for example) is completely invisible to them. Large-scale institutions have lost the capacities possessed by individual humans: of seeing the bigger picture, of making more imaginative inferences, and of caring for many dimensions of the human experience at once.

The aim of the French school of Collective Intelligence, then, is to understand better this richer notion of working and thinking together, and then to find technologies and institutions that allow it to scale up. Scaling up here still means, ultimately, getting larger groups of people working together on the same project: pooling knowledge and expertise, gaining access to and co-ordinating the use of distant resources etc. But the means to this end is human augmentation: allowing the individual human wider perception and reach.

In a small-scale group like a football team, each member is able to see and track what other members of the team are doing and so has a perception of the whole. In C.I. jargon, this is known as "holopticism". In larger organizations, such perception is usually replaced by extremely simplified abstractions : the org-chart, the monthly report and statistics, the phone-book of contacts in departments. I can't be watching and understanding what a hundred colleagues are doing in real time.

But maybe we can apply "technology" - in the widest sense that includes new schemes for representing information, new social practices and organizational structures, as well as actual automation - to help the humans broaden their perception and reach, and to get a richer and more timely perception of the whole. Information would still need to be selected and summarized, but that summary doesn't need to be depersonalized.{{To give a fashionable example of how this might be done, think again of a 100 member organization. If 20 members of the organization regularly post short snippets of news about their work, and state of mind, on weblogs; and if some 30 or so members each regularly read 3 or 4 of these, they are likely to have a reasonable daily perception of what's going on in each area. Add a couple of informal exchanges of news around the coffee machine, and small-world effects should ensure that pretty much everyone does track what's happening everywhere. And this summary offers a more humanized "cut" of the data.}}
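The footnote's back-of-the-envelope claim can be checked with a toy simulation (the numbers and the random wiring are my own guesses, not a model of any real organization): 20 bloggers, 30 readers each following 3 or 4 weblogs, plus a couple of informal coffee-machine chats per person, then count how many of the 100 members a piece of news can reach.

```python
import random
from collections import deque

def news_coverage(n=100, n_bloggers=20, n_readers=30, seed=1):
    """Fraction of members reachable from the bloggers along
    reading links (one-way) and coffee-machine chats (two-way)."""
    rng = random.Random(seed)
    edges = {i: set() for i in range(n)}
    bloggers = list(range(n_bloggers))
    # 30 members each follow 3-4 weblogs: news flows blogger -> reader.
    for reader in rng.sample(range(n), n_readers):
        for blogger in rng.sample(bloggers, rng.choice([3, 4])):
            edges[blogger].add(reader)
    # Everyone has a couple of informal chats: news flows both ways.
    for person in range(n):
        for other in rng.sample([p for p in range(n) if p != person], 2):
            edges[person].add(other)
            edges[other].add(person)
    # Breadth-first search outward from all the bloggers at once.
    reached, queue = set(bloggers), deque(bloggers)
    while queue:
        for nxt in edges[queue.popleft()]:
            if nxt not in reached:
                reached.add(nxt)
                queue.append(nxt)
    return len(reached) / n

print(news_coverage())  # typically close to 1.0
```

Even this crude wiring usually connects nearly the whole organization: the handful of random chat links is enough to produce the small-world effect the footnote relies on.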

These two conceptions of Collective Intelligence have different underlying ideologies and different aims. But they also have a huge overlap of interest and method.

  • Both should have an eye on the topology of social groups and how it affects information flows and differences in power. Therefore both have an interest in the recent explosion of research into the abstract, generalizable properties and behaviour of networks of many kinds.
  • Both can be concerned with the social dynamics of how groups form, how natural structures arise and what internal forces challenge them and threaten to destroy them.
  • Both can be concerned with how knowledge is represented. PierreLevy, a leading thinker of the "French school", has talked of a DynamicIdeography, a new level of communication whose symbols are pretty much computer programs in their own right: rich interactive multi-media spectacles. At the other end of the scale, we might be interested in simplistic information representation. What were the advantages of the Greek alphabet over Egyptian hieroglyphics? How can information be represented "superpositionally" across many weighted links within an artificial neural network, or by prices within a market?
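The "superpositional" idea can be made concrete with a tiny Hopfield-style associative memory (a standard textbook construction, sketched here from memory rather than taken from this essay's sources): two patterns are stored on top of each other in a single weight matrix, yet each can still be recovered from a corrupted cue.

```python
import numpy as np

def store(patterns):
    """Superimpose +/-1 patterns into one weight matrix (Hebbian rule)."""
    n = len(patterns[0])
    w = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(w, 0)  # no self-connections
    return w

def recall(w, cue, steps=5):
    """Repeatedly threshold the weighted input until it settles."""
    x = np.array(cue)
    for _ in range(steps):
        x = np.where(w @ x >= 0, 1, -1)
    return x

p1 = np.array([1, 1, 1, 1, -1, -1, -1, -1])
p2 = np.array([1, -1, 1, -1, 1, -1, 1, -1])
w = store([p1, p2])        # both memories live in the same links

noisy = p1.copy()
noisy[0] = -noisy[0]       # corrupt one bit of the cue
print(recall(w, noisy))    # recovers p1
```

No single link "contains" either memory; each weight carries a fraction of both patterns at once, which is exactly the kind of distributed representation the bullet point gestures at.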

I should note a debt here. These two views of collective intelligence bear more than a passing resemblance to a distinction between kinds of complex systems due to HerbertSimon. He identified two kinds of complexity: "hardware" and "software". HardwareComplexity resides in the components of a larger system. The image is a machine with a lot of specialized "intelligent" parts. SoftwareComplexity resides in the links between components. The image here is a digital computer which has millions of identical memory cells, but where the role, content and behaviour of each cell are determined by the shifting configuration of relationships between these various parts.

Herbert Simon introduced the phrase "the sciences of the artificial" to describe the general study of complex adaptive systems which exist for a purpose. In doing so, he draws connections between cognitive psychology, computer science, economics, organizational theory and cybernetics. In my understanding, "collective intelligence" is very much a continuation of this project. In some ways it broadens it - "intelligence" is qualitatively different from "adaptive" - and introduces new focuses of attention. But knowledge and examples from those fields can be instructive.

The same issues are important whether the ultimate aim is smarter behaviour from simple units and protocols or better co-ordination between complex units and protocols.

Collective Action

There is another issue which impacts on collective intelligence, and it is so important that it needs special attention. This I will call "collective action"; it is usually considered in the context of its problems.

You could characterize collective intelligence as being about the issues of co-ordination, sharing and distributing information within the collective, deciding what the collective "believes" given the evidence, and even how new ideas and more abstract representations of problems are generated. Collective intelligence guides the behaviour of the collective once the collective has decided what it wants to do.

But in many collectives there is a second sort of problem. The individual units or agents have their own, conflicting motivations and goals. There is a question of how these differences are to be resolved. In an ant colony, the ants mostly all want the same thing, and co-operation is implicit in the genetically hardwired behaviour. (Though conflict occasionally arises when some sterile females rebelliously become fecund, and have to be suppressed by chemical baths; and in BlindNakedMoleRats, wannabe reproducers are bullied by the dominant reproductive couple until the stress disrupts their reproductive cycle.)

In collectives of humans, or any autonomous intelligent units, disagreement is endemic. Disagreement ranges from simple selfishness through to equally altruistic but differing conceptions of what's "good" for the collective or what the collective's goals should be.

Once again, we can see two different general approaches to the problem - and, with even less justification, I will try to lump these into the schools defined earlier.

In the first, "American" approach, differences of agenda are considered to be immutable givens. Appeal is often made to some kind of biological constraint and desire is assumed to spring from instinct and innate drives. Human nature just is selfish.

Resolution of conflicts of interest does not, from this perspective, appear to lie in attempting to align the interests of different agents. Disagreement is viewed as an atomic, given property of individuals, and is managed within the system by negotiation and balance. Examples include:

  • an agent trading away some of its differing goals for some other reward;
  • victory or defeat for each goal by a universally respected process: often some kind of popular vote;
  • a balance of the differing goals reflecting the balance of opinions; a market may have room for two different products designed to fulfill a need, but the one which is ten times as popular has ten times the market share.

Whatever behaviour we need from collectives, we must tune the incentives, the punishments and the structure of the competition between the various actors, to try to achieve a balanced or harmonious whole, and to protect against particular individual units disrupting the workings of the whole in pursuit of their individual agendas.

The main tool for the analysis of conflicts of interest is GameTheory. This represents conflicting agents or parties as simple abstract players who each choose one of several "strategies" in a game defined by a set of rules. In a game, players receive a score which depends on their own move and those of the other players. It is axiomatic that all players are trying to maximize their score. It is not axiomatic that a game is necessarily competitive: some games are, others are not. Some classes of games also model asymmetric information, where some players have knowledge that others don't.

Games may be studied to try to learn how to "win" them - although "winning" may be defined pro-socially, as coming to the agreement which best satisfies everyone - or to understand which combinations and mixtures of strategies are stable, i.e. unlikely to be disrupted by players innovating new behaviours.
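A minimal sketch of this machinery, using the classic Prisoner's Dilemma with conventional payoff numbers (the scores 5/3/1/0 are the textbook defaults, not anything from this essay): represent the game as a payoff table, then check which strategy profiles are stable in the sense just described, i.e. no player gains by unilaterally switching.

```python
from itertools import product

# Prisoner's Dilemma: C = co-operate, D = defect.
# PAYOFF[(my_move, their_move)] = my score.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0,
          ('D', 'C'): 5, ('D', 'D'): 1}

def is_stable(move_a, move_b):
    """True if neither player can improve by switching alone
    (a pure-strategy Nash equilibrium)."""
    for alt in 'CD':
        if PAYOFF[(alt, move_b)] > PAYOFF[(move_a, move_b)]:
            return False  # player A would deviate
        if PAYOFF[(alt, move_a)] > PAYOFF[(move_b, move_a)]:
            return False  # player B would deviate
    return True

stable = [p for p in product('CD', repeat=2) if is_stable(*p)]
print(stable)  # [('D', 'D')]: mutual defection is the only stable profile
```

Mutual co-operation scores better for both players, yet it is not stable: each player is tempted to defect. This gap between the stable outcome and the pro-socially "winning" one is exactly the collective action problem.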

The "French" approach, once again starts with the complexity of the individual human within the system. It pays more attention to the "psychology" of the individuals and accepts that any interaction between people has an almost indefinite number of levels of complexity. Within the small meeting when two or three people come together to negotiate an agreement or decide a course of action or even to share the information they received, all manner of social factors may be influencing the outcome : from race, gender and social class through to the seating arrangement at the table to the aggressiveness of one of the participants.

The desires and goals of the agent are viewed as constructed by the history of the agent within the world and wider social context.

The aim of research is to tease out and compensate for those factors which might be disrupting the intelligence of such meetings, and to heal potential conflicts by bringing each member of the group to a better understanding of the whole range of factors influencing the others. In practice, this often translates into adding some formal recognition and separation of the various types of information being presented, and an egalitarian sharing of roles. For example, separate phases of the meeting are specified for critical comments and for supportive comments, and each member is given space (obliged, encouraged or allowed) to contribute both. We hope that this formality and the positive comments depersonalize and take the sting out of necessary criticism, and that bad feeling does not start to disrupt the efficient running of the group.

Furthermore, every individual is encouraged to go to a higher level and reflect on the comportment of others and how this affects the overall behaviour of the group, and to contribute this analysis to help other members of the group modify their behaviour.

The French school views disagreement as something to be dissolved at the lowest level by the steady application of information and reason. Far from being an axiomatic given, it is the result of a current failure of the system to fully inform every agent of all the relevant facts. Improvements in process, culture and technology will, once again, expand the scope and intelligence of groups by reducing the extent of conflicting interests. That is not to say this school rejects markets or elections, but these institutions are only part of the mix.


These, then, are the tasks facing a collective intelligence.

  • how to organize division of labour / specialization
  • how to pool knowledge
  • how to discover facts and resolve differences of opinion about them
  • how to resolve differences of opinion about purpose