ThoughtStorms Wiki

I originally submitted this to UK Wired's "Idees Fortes" column in about 1995. There was some interest from the editor, via a friend of a friend, but he wanted a change in emphasis that I wasn't very keen on, and the whole thing got put aside. Then, suddenly, UK Wired was no more. So I might as well reproduce the original here.

Today I note the parallels with AlgorithmicDespotism.

Tomorrow, or the day after, or some day soon, you get a letter through the door inviting you to your local hospital's genetic screening program. Your DNA is analysed to discover whether you have a tendency towards inherited disease. The next day, you get a letter from your health insurance company, demanding that they see the results of the analysis.

There is an obvious conflict of interest. If you discover that you are likely to develop such a disease, you will want to find the best cover you can. The insurers, conversely, want to pay as little as possible and will increase your premium or even reject you if they discover this probability. But keep your genetic destiny secret and you are, in their eyes, trying to "defraud" them.

It's easy to see how this conflict has arisen, but whose problem is it? Is it a problem of medical ethics, or privacy, or copyright of genetic material? These are, undoubtedly, the domains where the problem will be argued. But I suspect the issue is merely a symptom of a wider challenge to insurance, due to a diminution of uncertainty.

Health insurance is fundamentally a gamble. You bet the insurers that you will get ill; they bet that you stay well. The payout is the cost of treatment. Good health care is expensive, and you are going to want a high return on your bet. But like any other gamble, as the odds shorten, the payout becomes smaller relative to the stake. You want long odds, which requires that there be significant uncertainty about the outcome. New technologies and accurate information can destroy this uncertainty.
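The arithmetic behind "shortening odds" can be sketched in a few lines. This is an illustrative calculation, not anything from the original article: the numbers, function name, and loading margin are all hypothetical. The point is that an actuarially fair premium is roughly probability times payout, so as the insurer's estimate of your probability of illness climbs, the premium converges on the cost of treatment itself.

```python
# Illustrative sketch (hypothetical numbers): how shortening odds
# turn an insurance bet into mere pre-payment.

def fair_premium(p_illness, treatment_cost, loading=0.1):
    """Actuarially fair premium: expected payout plus a loading margin."""
    return p_illness * treatment_cost * (1 + loading)

cost = 100_000  # cost of treatment

# Under genuine uncertainty the premium is a small fraction of the payout:
print(round(fair_premium(0.01, cost)))  # 1100 -- long odds, cheap cover

# As predictive information shortens the odds, the premium approaches
# the payout, and the gamble disappears:
print(round(fair_premium(0.90, cost)))  # 99000
```

At the limit, paying a premium of 99,000 to cover a 100,000 treatment is no longer insurance in any meaningful sense.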

Insurance is not alone. Other enterprises which rely on uncertainty, such as gaming and share trading, are also threatened by too much, too accurate information; and the use of specialised, insider knowledge is considered detrimental and even unethical in these domains. But information wants to be free, and information technology enables it to be so. Can the enterprises which rely on uncertainty survive in an age of more and more reliably predictive information?

It can be argued that there are insurance companies willing to take on high-risk clients. That may be so in some areas. But for health, with improved genetic analysis, the odds are potentially orders of magnitude shorter than those governing, for example, motor insurance. As the companies gain more knowledge about their customers, they will classify them within smaller and smaller groups, make better predictions, exclude coverage for likely eventualities, and offer cheaper policies on unlikely ones. Health insurance will become expensive for those who need it the most, cheap for those who don't. In pursuit of this extra efficiency, insurers may very well devalue their product and thus damage their own industry. Yet if they don't, they fear the customer will use a similar knowledge advantage against them. This is not just a story of company greed.
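The "smaller and smaller groups" effect can be illustrated with a toy risk pool. All the figures here are hypothetical, chosen only to show the shape of the argument: one pooled premium spreads the cost across everyone, while perfect classification hands each group its own expected cost.

```python
# Illustrative sketch (hypothetical numbers): pooled premium versus
# premiums after perfect risk classification.

population = [
    # (group size, probability of needing treatment)
    (900, 0.02),   # low-risk majority
    (100, 0.50),   # high-risk minority flagged by screening
]
treatment_cost = 100_000

# One pool, no screening: everyone pays the average expected cost.
total_expected = sum(n * p * treatment_cost for n, p in population)
pooled_premium = total_expected / sum(n for n, _ in population)
print(round(pooled_premium))  # 6800 -- everyone pays the same

# Perfect classification: each group pays its own expected cost.
for n, p in population:
    print(round(p * treatment_cost))  # 2000 for the many, 50000 for the few
```

Cheap for those who don't need it, ruinous for those who do: exactly the outcome the paragraph above describes.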

Insurance is important socially, because it is pretty much the only way we now have of sharing misfortune and redistributing wealth. Those with medium and high incomes are increasingly reluctant to be taxed for the benefit of the poor, but can still be frightened into sharing resources through fear of an uncertain future. Britain's welfare state is nominally a "national insurance" scheme. But there is continuous pressure towards private, voluntary measures.

As the nation state dwindles, insurance may become the only glue that holds 21st century society together. But can privacy legislation or encryption technology protect the uncertainty that is necessary for insurance to function? Or must we start looking for a new political mechanism for sharing misfortune?

I can see that people who get flagged by tests with high predictive value (hmm, what chance rating would trigger this? 50%?) might find insurance to be onerously expensive. (This becomes SelfInsurance.) But I'm skeptical as to what % of the population will ever get flagged by any tests with sufficiently high predictive value.... –BillSeitz

: Bill, have a look at the first DanielDavies piece (which I just refactored to OnRisk), where he explains, more technically than I really understand, how some extra knowledge starts to destroy the insurance market. Of course, if all this genetic screening just doesn't work, there's no problem. But if people thought that, I guess they wouldn't be researching into it. – PhilJones

Could you use insurance for this kind of crisis? It's nice in theory, and maybe for a few high-profile, feel-good early experiments. But once the insurers set their actuaries to do some serious analysis, the premiums will go way up. (Although maybe still cheaper than current aid, because of the delays mentioned in the story.)

Also, aren't some of the disasters we're talking about, actually the kind of "acts of God" that insurers wouldn't normally touch?

Contrast :

Perhaps we'll have PeerToPeer insurance based on BlockChains :

The real reason insurers won't cover people who are sick.

  • tl;dr : They know too much (more than the insurance company) about their own willingness to fight the disease with expensive treatment.


See also :