We all tend to confuse probability with uncertainty. We shouldn’t.
Faced with a decision between option A and option B, we naturally want to understand the potential outcomes and know their likelihoods. For example, John is buying a new laptop computer for $2500 and has to decide whether to purchase the additional extended warranty. He can use probabilistic reasoning and pertinent data to help him make his decision. The reviews say that only one in 50 of this model develops the kinds of difficulties covered by the extended three-year warranty, which costs $150. He can lay out the possible outcomes and their costs, and calculate the ‘utilities’ of his potential losses or gains:
|  | computer failure (2%) | no computer failure (98%) | calculated utility |
| --- | --- | --- | --- |
| buys warranty ($150) | saves $2350 (+$47) | loses $150 (−$147) | likely result (−$100) |
| declines warranty ($0) | spends $2500 for new computer (−$50) | saves $150 (+$147) | likely result (+$97) |
From a probabilistic perspective, a purely rational person would conclude that purchasing the extended warranty is a poor choice. (And most independent reviewers, like Consumer Reports, advise against purchasing most of these extras.) However, so many customers buy the warranty that selling these warranties is highly profitable. Most people do not use System 2 thinking (see Kahneman) to calculate the economic utilities of a decision.
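The arithmetic behind the table can be sketched in a few lines of Python (the language choice is mine, not the author's; all figures come from the text):

```python
# Expected-cost sketch of the warranty decision described above.
# Figures from the text: $2500 laptop, $150 warranty, 1-in-50 (2%) failure rate.
p_fail = 0.02
laptop_price = 2500
warranty_price = 150

# If John buys the warranty, he pays $150 no matter what happens.
cost_buy = warranty_price

# If he declines, he faces a 2% chance of paying $2500 for a replacement.
cost_decline = p_fail * laptop_price

print(f"expected cost if he buys:     ${cost_buy:.0f}")
print(f"expected cost if he declines: ${cost_decline:.0f}")
# Declining costs $50 on average versus a certain $150 for the warranty,
# which is why the purely 'rational' choice is to decline.
```

Note that this is an expectation over many hypothetical repetitions; it says nothing about what will happen to John's one laptop, which is the essay's point.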
Unfortunately, clinicians tend to act as if patients carefully calculated potential risks and losses when making decisions. We don’t.
Even if we did, our intuitive or ‘gut’ reactions would frequently override our intellectual calculations. For example, many would decide that they are willing to spend the $150 for the ‘peace of mind’ that comes with knowing their potential loss is covered, because they are more comfortable with the very likely (98/100) loss of $150 than with the much less likely (2/100) loss of $2500. (Admit it – that feels ‘right,’ doesn’t it?) Almost the entire insurance industry, many successful sales techniques, and a great deal of gambling behavior depend on the fact that individual humans do not make decisions based on careful calculations of their potential gains and losses.
Furthermore, it is not clear how applicable this sort of calculation is for individual decisions. Risk/benefit analysis is based on known (or knowable) frequencies of outcomes in large populations or multiple repetitions, while individual decisions exist in a different context, where probability may be overshadowed by uncertainty. The insurance industry may thrive because it knows how to use its knowledge of population risks, but individuals have to make individual and often very important decisions where the outcome is unknown and unknowable. We shouldn’t pretend that statistics and probabilities are a big help.
The classic example is the coin toss, where a fair coin can be counted on to show heads and tails in roughly equal numbers over a large number of tosses. Knowing that about half of 10,000 tosses will be heads is of absolutely no value in predicting what will happen on the next individual toss. (When was the last time you saw a coin land on its edge, half heads and half tails?)
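A quick simulation makes the coin-toss point concrete (this is my illustration, not the author's; the seed is fixed only so the run is reproducible):

```python
import random

random.seed(0)  # fixed seed for a reproducible demonstration

# Simulate 10,000 fair coin tosses.
tosses = [random.choice("HT") for _ in range(10_000)]
heads_fraction = tosses.count("H") / len(tosses)

# The aggregate frequency is very close to 1/2...
print(f"fraction heads over 10,000 tosses: {heads_fraction:.3f}")

# ...but that tells us nothing about any single toss: the best
# available prediction for toss number 10,001 is still a 50/50 guess.
next_toss = random.choice("HT")
print(f"toss 10,001 came up: {next_toss}")
```

The aggregate converges; the individual toss stays a coin flip. That is the gap between population probability and individual uncertainty.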
How might this look with clinical decisions?
On the way home from the computer store, John stops at his doctor’s office because his ear has been increasingly painful over the last two days. The doctor listens to his description of recent cold symptoms, then a plugged right ear, and now moderate pain in that ear, examines his ears, and tells him he has an ear infection with fluid behind the ear drum. John asks what his options are, and the doctor explains that 3 out of 4 ear infections like this are viral and will resolve on their own in 2-4 days, and that 1 out of 4 are bacterial and will last 1-2 weeks unless treated with antibiotics. John could do a complex risk (prolonged pain, perforated ear drum, cost of medication, side effects of medication) and benefit (avoiding antibiotic side effects, saving money, getting better faster) analysis, which would give him a ‘utility’ calculation. But collecting the data for this is laborious, so he will probably use a common shortcut and substitute a simpler problem for the complex one: he will reason that there is a 1 in 4 chance of benefit and that the risk (and cost) of treatment are very small.
Or John’s wife is told that the T-score on her DXA (bone density x-ray) is -2.6, which means that she has a 2.2% chance of a hip fracture in the next 4 years. (Actually, that is NOT what it means. It means that 22 out of 1000 women like her would experience a hip fracture in 4 years. She either will or she won’t.) She is offered the chance to take a bisphosphonate medication like alendronate. Even though 978/1000 untreated women like her will NOT experience a hip fracture in the next 4 years (and alendronate will only cut the number of fractures in 1000 women roughly in half, from 22 to 10), most patients in this situation are uncomfortable with their personal uncertain (but very low) risk of hip fracture and will opt for treatment.
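The fracture numbers above can be restated with the standard epidemiological measures of absolute risk reduction (ARR) and number needed to treat (NNT). A minimal sketch, using only the 22-versus-10-per-1000 figures from the text:

```python
# Population-level effect of treatment, from the figures in the text:
# 22 hip fractures per 1000 untreated women over 4 years, 10 per 1000 treated.
risk_untreated = 22 / 1000
risk_treated = 10 / 1000

arr = risk_untreated - risk_treated   # absolute risk reduction
rrr = arr / risk_untreated            # relative risk reduction
nnt = 1 / arr                         # number needed to treat for one fewer fracture

print(f"absolute risk reduction: {arr:.1%}")    # about 1.2%
print(f"relative risk reduction: {rrr:.0%}")    # roughly half
print(f"number needed to treat:  ~{nnt:.0f}")   # roughly 83 women for 4 years
```

Roughly 83 women like her would need 4 years of treatment to prevent one fracture, and there is no way to know in advance whether she is that one woman: the calculation describes the population, not her.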
As both clinicians and patients, we should be careful to recognize that the most careful calculation of the potential utilities of risks and benefits can suggest an optimal decision for a population, but it does NOTHING to help us address the uncertainty of what will happen to a given individual. Calculating risk (chance or likelihood) for a population does not reduce the uncertainty involved in individual decisions.