Clinical decision making (1)

Medicine has made great strides in the last 100 years. The Flexner Report (1910) marked a turning point: after centuries of pre-scientific magic, we progressed in less than 50 years to a culture of medical care based on professionalism and science. In the last 40 years we have mostly replaced physician paternalism with shared decision-making (SDM) based on patient autonomy. During the last 20 years we have been gradually replacing intuition and tradition with evidence, and supplementing frequentist statistics with Bayesian probabilities. (Briefly compared in this cartoon.) Does this mean that when you seek care in 2013 your symptoms, medical history, physical exam and testing are assessed in light of the current best evidence, and that your clinician helps you make a rational and well-informed decision based on the most likely diagnoses and best treatments? In my experience, we aren’t doing all that well.

There are four major pieces of this problem:

  • Collecting and staying current with clinical evidence is daunting. (According to a discussion of information overload in the BMJ, keeping up to date on just cardiac imaging requires reading 30 scientific articles every week.) And it is getting worse.
  • Evaluating the quality and pertinence of information takes time and expertise and is the subject of lectures, articles and texts.
  • Collecting accurate patient data takes time and considerable expertise.
  • Informed decision making is hard and medical/mathematical numeracy is lacking.

This post will be one of a series using specific clinical examples from my practice to illustrate the last point.

Making medical decisions based on evidence and probability (rather than on magic, tradition, or physician authority and intuition) is hard.  Very hard.  So hard that it is surprisingly uncommon. We tend to depend instead on incomplete and outdated information stored in memory, learned practice patterns, mental shortcuts (heuristics) which are fallible and poorly suited for the complex decisions necessary in health care, and algorithms and protocols which are derived from population data but questionably applicable to individuals. We use what Daniel Kahneman calls System 1 processes rather than System 2 processes. (His classic book is here and a video of a lecture is here.)

The case of the confusing exercise tolerance test (ETT)

A 52-year-old male has an ETT as a pre-op evaluation for a procedure; the test is (+) and he is advised to have angiography. He calls me, his PCP, asking for a referral, and I ask him to come in so we can discuss his test result and decide how to proceed. He - and his surgeon - express irritation at the delay for what seems obvious: he has a (+) stress test suggesting a high likelihood of CAD and needs expeditious angiography to stratify risk and perhaps treat a blockage before his surgery. After all, the ETT is 85% sensitive and 75% specific for heart disease, so a (+) test is associated with a high likelihood of disease, right?

At the visit, we reviewed his status. He never smoked, has a BMI of 22, has lipids that are well controlled (LDL 72) on diet and a statin, has BPs in the 120s/70s, exercises vigorously 45 minutes to an hour 5 times weekly without cardiac symptoms, and has no family history of premature CAD. Other than his well controlled hyperlipidemia, his only medical problem is erectile dysfunction, which responds well to Viagra. The published population prevalence of silent CAD in asymptomatic 50-year-old males is about 10%. This figure includes all comers, including those with multiple risk factors, so his pre-test probability of having silent heart disease is probably significantly lower than that. Assuming a prevalence of 10%, the post-test probability is about 27%, meaning that roughly 1 out of 4 positive tests is a true positive and the other 3 are false positives: an abnormal test but no heart disease. With his more realistic pre-test probability of 5%, his post-test probability of having silent heart disease is only 15%.
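The post-test probabilities quoted above follow directly from Bayes’ theorem. A minimal sketch in Python, using the sensitivity, specificity, and prevalence figures given in the text (the function name is mine, for illustration):

```python
def post_test_prob_positive(prevalence, sensitivity, specificity):
    """Probability of disease given a positive test (Bayes' theorem)."""
    true_pos = sensitivity * prevalence                # diseased, test (+)
    false_pos = (1 - specificity) * (1 - prevalence)   # healthy, test (+)
    return true_pos / (true_pos + false_pos)

# ETT: 85% sensitive, 75% specific
print(post_test_prob_positive(0.10, 0.85, 0.75))  # ~0.27 at 10% prevalence
print(post_test_prob_positive(0.05, 0.85, 0.75))  # ~0.15 at 5% prevalence
```

Notice how strongly the answer depends on the pre-test probability: the same (+) result means very different things at 10% versus 5% prevalence.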

When I was a medical student this approach (Bayesian pre-test and post-test probabilities) was not commonly used in medicine. Now I can make the calculation quickly and easily on my iPhone or iPad, but for years I had to work it out manually with a ‘four-square’ table, as follows:

In a population of 1000 52-year-old men like my patient, we will assume that the prevalence of silent heart disease is 5%, meaning that 5/100, or 50/1000, have silent heart disease:

 prevalence     CAD present   CAD absent   TOTAL
 5/100 or 5%        50           950        1000


The test is 85% sensitive, so it finds 85% of the 50 with heart disease (42.5 patients) and misses 15% of the 50 (7.5 patients). This lets us fill in the first column of our table, splitting those with CAD into positive and negative tests:

          CAD present   CAD absent   TOTAL
 ETT +       42.5
 ETT -        7.5
 TOTAL         50           950       1000


The ETT is 75% specific, so 75% of those with no disease (712.5 patients) have a negative test and 237.5 patients without disease have a (+) test.

          CAD present   CAD absent   TOTAL
 ETT +       42.5          237.5
 ETT -        7.5          712.5
 TOTAL         50           950       1000


Now we finish the arithmetic to make sure our totals add up:

          CAD present   CAD absent   TOTAL
 ETT +       42.5          237.5       280
 ETT -        7.5          712.5       720
 TOTAL         50           950       1000


Looking at this table, we see that 42.5 out of 280 patients just like him with a (+) test actually have silent CAD. Fifteen percent. And that 237.5 out of 280 patients like him with a (+) test have no heart disease. Eighty-five percent. The (+) test tripled the probability that he has heart disease from 5/100 to 15/100, but three times a small number is still a small number.
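The four-square arithmetic above can be sketched in a few lines of code (a Python sketch of the same table, assuming 1000 patients, 5% prevalence, 85% sensitivity, and 75% specificity; the variable names are mine):

```python
N, prevalence = 1000, 0.05
sensitivity, specificity = 0.85, 0.75

diseased = N * prevalence               # 50 with silent CAD
healthy = N - diseased                  # 950 without

true_pos = sensitivity * diseased       # 42.5  (ETT + / CAD present)
false_neg = diseased - true_pos         # 7.5   (ETT - / CAD present)
true_neg = specificity * healthy        # 712.5 (ETT - / CAD absent)
false_pos = healthy - true_neg          # 237.5 (ETT + / CAD absent)

prob_cad_given_pos = true_pos / (true_pos + false_pos)    # 42.5 / 280
prob_cad_given_neg = false_neg / (false_neg + true_neg)   # 7.5 / 720
print(f"P(CAD | +ETT) = {prob_cad_given_pos:.0%}")        # 15%
print(f"P(CAD | -ETT) = {prob_cad_given_neg:.0%}")        # 1%
```

The same few lines also give the reassurance a negative test would have provided: about a 1% chance of silent disease.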

(Note: This calculation was quick and intuitive, right? I thought not. Until all clinicians have ready access in exam rooms to prevalence data for illnesses and to the sensitivity and specificity of tests, have calculators to do the work and are trained and comfortable using them, until patients are used to seeing this sort of discussion, and until schedules allow time to think and talk, this is not going to happen often. Trust me.)

I explained that the (+) ETT test suggested his chance of having mild (asymptomatic and stable) CAD was probably about 15%. Further, we have no evidence that treating asymptomatic heart disease with procedures like stenting or surgery changes the outcome, decreases cardiac risk at surgery, or prevents sudden death. One could look at the test as showing that he was unlikely (15%) to have CAD, and if he did, doing a cardiac catheterization was unlikely to improve his health or the outcome of his surgery.

At this point he sat quietly for a few moments before his wife asked: ‘Then why did the surgeon want the ETT and why did the cardiologist suggest a cath to look for a partly blocked artery?’  Good questions, I thought as I told him it was possible that there was information they had that we were not considering so I would call them and discuss it and then get back to him.

The surgeon told me that he had read that ED was a marker for CAD so he asked all his male patients over 50 about ED and referred them for ETT if they had ED. (ED and CAD share the same risk factors: hypertension, diabetes, smoking, age, obesity. The populations overlap. The published data suggests it is reasonable in patients who present with ED to ask about and address other cardiac risk factors, but there is no evidence to support cardiac stress testing based on ED.)

The cardiologist told me that the patient had been referred to him ‘to rule out heart disease’ and, in order to do that, he had to do a stress test. (The table above shows that a negative stress test would have meant the patient had only a 7.5-out-of-720 probability - about 1% - of silent heart disease: very reassuring, indeed.) Once he had a (+) test, ‘my only option was to find out how bad it is and to treat whatever I find.’ When I pointed out the results of the actual risk calculation, he conceded that the test was probably a false (+) given the patient’s low risk and excellent exercise tolerance, and said that if the patient had been referred to him for a pre-surgical risk assessment he would have recommended against the test. However, he still felt a cath was appropriate because ‘now that he has a (+) ETT, if he has a heart attack during his surgery, it will be on me.’

Let’s review what happened: 

  • The surgeon was aware that there is an association of ED with other forms of vascular disease including CAD and in good faith but without good risk assessment, referred the patient for a test that was much more likely to give a false positive than a true positive. 
  • The cardiologist felt he was asked to do an ETT (rather than to actually assess the patient’s risk and decide whether an ETT was necessary), and once the test was (+), risk aversion made him feel it was necessary to do a second (and more dangerous) test to prove that the initial result was a false (+).
  • The patient and his wife remained very anxious about the implications of the (+) ETT. They said they wished they had known more about this before the test was done, but that now that it was done, they ‘needed to know’ and elected to proceed with the diagnostic catheterization, which showed no evidence of CAD.
  • He had his surgery with no cardiac issues.

What I wish had happened:

  • The surgeon had suggested that the patient see his PCP to discuss whether or not his lipids and ED represented a significant risk that needed to be addressed before surgery.
  • The patient had come in to go over the pertinent data and have the risk discussion in order to decide what (if any) risk he had and how to evaluate it.

This incident involved fairly high-risk decisions that happen relatively infrequently. In the next installment, I’ll talk about how this sort of decision-making process (or its absence) can have an impact on the common problem of a sore throat in a four-year-old.
