The Math Problem in The Good Doctor, and Why It Matters

I enjoy the television show The Good Doctor, as I enjoy most medical dramas.  This week’s episode, “Point Three Percent,” makes an all-too-common math error.  This error has profound significance for criminal procedure, tort law, and scientific studies, because it changes whether we consider certain data significant.

In the episode of The Good Doctor, Dr. Shaun Murphy believes that a boy diagnosed with bone cancer may instead have a different, treatable disease.  Because the boy’s lesions were biopsied twice, the odds of two false positives are .3 percent, or 3/1000 cases.  Dr. Murphy then concludes that if 333 patients in the boy’s situation are re-tested, 332 will be given false hope for four hours, but 1 out of the 333 (.3%) will walk away having learned the correct diagnosis, that he does not have bone cancer.

This is a statistical fallacy.  A false positive rate measures how many people out of 100 who don’t have bone cancer will test positive for the cancer anyway.  That rate, .3/100, is different from the proportion of people who have tested positive but are actually negative.  In other words, false positive rates measure the probability of testing positive for cancer given that someone is negative for cancer.  The not-false-hope measurement is the probability that someone is negative given that he has tested positive.  Mathematically, they are different.  One is probability(testing positive/no cancer), and the other is probability(no cancer/testing positive).

Bayes’ Rule tells us that figuring out the likelihood that the boy who has already tested positive doesn’t have cancer requires knowing the base rate of cancer in the population.  For example, let’s say bone cancer occurs at a rate of 400/1000 (this number is unrealistically high, but it keeps the arithmetic simple).  So, for every 1000 people, 400 actually have bone cancer.  Of the 600 who do not, .3% will falsely test positive, or 1.8 people.  Thus, for every 401.8 people who test positive for bone cancer (assuming there are no false negatives), 1.8 of them will not have bone cancer.  So, of the people who test positive for bone cancer, about .45% (1.8 out of 401.8) will not have bone cancer, not .3%.  This means less false hope and more people who will learn they don’t actually have cancer.
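
Here is a minimal sketch of that arithmetic in Python, using the same illustrative figures (the 400/1000 base rate is the made-up number from this paragraph, and I assume the test misses no real cancers):

```python
# Illustrative numbers only: a made-up 400/1000 base rate and the episode's 0.3% false positive rate.
population = 1000
have_cancer = 400                      # assumed base rate, far higher than reality
no_cancer = population - have_cancer   # 600 people without bone cancer

false_positive_rate = 0.003            # P(test positive / no cancer) = 0.3%
true_positives = have_cancer           # assume no false negatives
false_positives = no_cancer * false_positive_rate  # 1.8 people

# P(no cancer / tested positive) -- the number the episode actually needs
p_no_cancer_given_positive = false_positives / (true_positives + false_positives)
print(p_no_cancer_given_positive)      # ~0.0045, i.e. about 0.45%, not 0.3%
```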

Now, if bone cancer occurs at a rate of 200/1000 people, there will be 200 people who test positive because they have it, and .3% of the remaining 800 who falsely test positive, or 2.4 people.  Then, for every 202.4 people who test positive, 2.4 will be false positives, or about 1.2% of them.  There is a much higher likelihood in this scenario that a given positive result is a false positive, because the cancer is rarer.  And indeed, the rarer the cancer, the more likely a positive result is a false positive instead of a true positive, in spite of a low false positive rate.  This means even more people will learn they don’t actually have cancer.
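
Wrapping the same calculation in a small helper makes the pattern easy to see: hold the false positive rate fixed, and the share of positives that are false alarms climbs as the disease gets rarer.  The base rates below are still purely illustrative:

```python
def p_no_cancer_given_positive(base_rate, false_positive_rate=0.003):
    """P(no cancer / positive test), assuming a test with no false negatives."""
    true_positives = base_rate                           # everyone with cancer tests positive
    false_positives = (1 - base_rate) * false_positive_rate
    return false_positives / (true_positives + false_positives)

for base_rate in (0.4, 0.2, 0.01):
    print(base_rate, round(p_no_cancer_given_positive(base_rate), 4))
# 0.4  -> 0.0045  (about 0.45%)
# 0.2  -> 0.0119  (about 1.2%)
# 0.01 -> 0.229   (for a genuinely rare disease, nearly a quarter of positives are false)
```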

This is important.  Courts have messed up this basic premise of Bayes’ Rule: that you need to know the base rate of some underlying condition (like the incidence of cancer) to convert a false positive rate into the likelihood that any given positive result is valid.  Bayes’ Rule also means that drug-sniffing dogs with low false positive rates don’t tell us much about whether a sniff alert actually means there are drugs in the car.  We would need to know the incidence of drugs in cars to figure out that number.  (I wrote a paper about that.)
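
The dog-sniff point is the same arithmetic.  The figures below are invented purely to show the shape of the argument; they do not come from the paper or from any real study of dogs or of drug incidence:

```python
# All figures hypothetical, for illustration only.
p_drugs = 0.05              # suppose 5% of stopped cars actually carry drugs
p_alert_given_drugs = 0.95  # suppose the dog alerts on 95% of cars with drugs
p_alert_given_clean = 0.10  # suppose a 10% false positive rate on clean cars

p_alert = p_drugs * p_alert_given_drugs + (1 - p_drugs) * p_alert_given_clean

# Bayes' Rule: P(drugs / alert) -- what the alert actually tells the officer
p_drugs_given_alert = p_drugs * p_alert_given_drugs / p_alert
print(round(p_drugs_given_alert, 2))  # 0.33: with these numbers, most alerts are false alarms
```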

Misunderstanding of Bayes’ Rule also means that a lot of experiments that appear statistically valid are likely not, which is one reason so many psychological studies cannot be replicated.  We need to do a better job educating people in statistics, because it affects everything we think, everything we know, and everything we think we know.

4 thoughts on “The Math Problem in The Good Doctor, and Why It Matters”

  1. If I follow your math, then if the actual cancer rate is 1/101 and the false positive rate (positive test when healthy) is 3/100, then the false alarm rate (no cancer when testing positive) is 75% (!!). That is, the one actually sick person tests positive while three out of one hundred healthy people test positive. So, three out of four positive tests will be false alarms.

    I believe the general formula for the false alarm rate (Y) as a function of the false positive rate (X) is Y = (1-C)/P * X, where C is the rate of actual cancer in the population and P is the rate of positive tests in the population.


    1. Well, this is just Bayes’ rule, so if A = doesn’t have cancer and B = positive test result, then p(A/B) = p(B/A) x p(A) / p(B). Using your numbers, p(A/B) = false alarm rate = (3/100) x (100/101) / (4/101) (assuming all people with cancer test positive) = 75%. So, I think my method works. No?

