On my inability to improve decision making

Summary: It’s been suggested that improving decision making is an important thing for altruists to focus on, and there are a wide variety of computer programs which aim to improve clinicians’ decision making ability. Since I earn to give as a programmer making healthcare software, you might naively assume that some of the good I do is through improving clinician decision making. You would be wrong. I give an overview of the problem, and argue that the difficulties which make improving medical decision making hard are general ones, which suggests that low-hanging fruit is rare in the field of decision support.

Against stupidity the gods themselves contend in vain. - Friedrich Schiller

In 1966, the Massachusetts General Hospital Utility Multi-Programming System (MUMPS) was created as one of the first healthcare information technology platforms. Running on the “cheap” ($70,000) PDP-7, it spread to become one of the most common pieces of infrastructure in healthcare - to this day, if you walk into your doctor’s office there’s a good chance some part of what you see has MUMPS in its stack.

A few years later, researchers at Stanford, using a computer with the approximate power of today’s wristwatches, created MYCIN, a program capable of outperforming human physicians in diagnosing bacterial infections. Unlike MUMPS, such programs are still far from use in everyday care today: when I go to the doctor’s office I’m not diagnosed by computerized super-doctors but instead by the time-honored combination of human gut, skill and the occasional glance at a reference volume. Even “low-skill” jobs like calling patients to remind them about their appointments are still usually done by receptionists or temps with a printed call list, a process essentially unchanged in 50 years.

If people are better at making decisions, then we will be better at a whole range of things, which makes decision-support technology an important priority for altruists. It was listed as one of 80,000 Hours’ top priorities, for example. I haven’t seen many empirical examinations of how decision-making technology improves (or fails to improve) our abilities, so I offer healthcare IT as a case study.

Different, not fewer, problems

Clinicians sometimes order the wrong thing. Perhaps they forget the dosing and accidentally order 200 milligrams instead of 200 micrograms, or they order penicillin because they forgot that the patient’s allergic.

It’s relatively easy to program a computer to warn the user when their prescription is off by an order of magnitude or conflicts with a documented allergy, but it turns out that doctors are actually pretty good at what they do most of the time. If they order an unusually high dose, it’s probably because the patient has an unusually severe case. If they order a med that the patient is allergic to, it’s probably because they decided the benefits outweigh the risks. As a result, these warnings are almost always noise without a signal.
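
To make the mechanics concrete, here’s a minimal sketch in Python of the kind of dose-range check I have in mind; the drug list, “typical dose” table and ten-fold threshold are illustrative assumptions of mine, not how any real system is configured.

# Minimal sketch of a dose-range alert, assuming a made-up table of
# "typical" doses; real systems draw on curated drug databases.
TYPICAL_DOSE_MG = {
    "fentanyl": 0.2,    # i.e. 200 micrograms
    "ibuprofen": 400,
}

def dose_alert(drug, ordered_dose_mg):
    """Return a warning if the order looks like a magnitude error."""
    typical = TYPICAL_DOSE_MG.get(drug)
    if typical is None:
        return None  # no reference dose on file, so no alert
    if ordered_dose_mg >= 10 * typical or ordered_dose_mg <= typical / 10:
        return ("Ordered %g mg of %s; typical dose is %g mg. Confirm?"
                % (ordered_dose_mg, drug, typical))
    return None

print(dose_alert("fentanyl", 200))   # the milligram/microgram slip above: fires
print(dose_alert("fentanyl", 0.2))   # a routine order: prints None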

The result is familiar to anyone who used the version of Microsoft Office with Clippy: clinicians slam on the keyboard to close all message boxes without bothering to read the warnings, completely negating any possible benefits. This “alert fatigue” (as it is politely termed) sometimes stems from organizations’ fears of lawsuits, which keep extraneous alerts around (Tiwari et al. 2013), but even in trials which are run specifically to improve health and are judged successful enough to publish, fewer than a fourth have any impact on patient outcomes (Hemens et al. 2011).

GIGO

Anyone who’s done machine learning is aware of the maxim “garbage in, garbage out”. Even the most amazing prediction algorithm will give bad results if you feed it bad input, and the data in current medical records is far from perfect.

Medical records are written of, by and for humans, and there is strong resistance to changing that. If your program requires someone with MD-equivalent skills to translate the patient’s free-text chart into a discrete dataset that the software can analyse, then why would you use it? You might as well just hire the doctor to do the diagnosis herself.

This problem is largely what’s held back programs like MYCIN. They work great if your research grant provides for a grad-student sweatshop to code data into your specialized format, but they don’t work so well in the real world.

Doctor-Hardness

To summarize these two problems: people originally thought they could slice off just a tiny piece of clinicians' jobs and improve that piece without worrying about the rest. But it turned out that to do well on even that tiny slice, a program needs to essentially replicate everything a doctor does - in computer science terms, these problems are "doctor-hard".

Cost

What have we spent to get these minimal benefits?

The NIH’s Biomedical Information Science and Technology initiative has funded about $350 million worth of research (not all of it in clinical decision support), but this amount pales in comparison to what governments have spent getting IT into the hands of front-line physicians.

The HITECH Act (part of the 2009 US stimulus bill) is expected to spend about $35 billion on increasing the adoption of electronic medical records. On the other side of the pond, the NHS’ troubled IT program ended up costing around £20 billion, up a mere order of magnitude from the original £2.3 billion estimate.

An explicit cost-benefit analysis of decision support research would require a lot more careful analysis of these expenditures, but my goal is just to point out that the lack of results is not due to lack of trying. Decades of work and billions of dollars have been spent in this area.

Efficiency

In retrospect, I think one argument we could have used to predict the poor cost-effectiveness of these interventions is to ask why they hadn’t already been invented. The pre-computer medical world is filled with checklists, so if there were an easy way to detect mistyped prescriptions or diagnose bacterial infections, it would probably already be in use.

This is a sort of "efficiency" argument: if there is some easy way to improve decision making, it has probably already been implemented. So when we're examining a proposed decision support technique, we might want to ask why it hasn't already been done. If we can't attribute it to a new disruptive technology or something similar, we might want to be skeptical that the problem is really so easy to solve.

Acknowledgements

Brian Tomasik proofread an earlier version of this post.

Works Cited

Ash, Joan S., Marc Berg, and Enrico Coiera. "Some unintended consequences of information technology in health care: the nature of patient care information system-related errors." Journal of the American Medical Informatics Association 11.2 (2004): 104-112. http://171.67.114.118/content/11/2/104.full

Hemens, Brian J., et al. "Computerized clinical decision support systems for drug prescribing and management: a decision-maker-researcher partnership systematic review." Implementation Science 6.1 (2011): 89. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3179735/

Reckmann, Margaret H., et al. "Does computerized provider order entry reduce prescribing errors for hospital inpatients? A systematic review." Journal of the American Medical Informatics Association 16.5 (2009): 613-623.

Tiwari, Ruchi, et al. "Enhancements in healthcare information technology systems: customizing vendor-supplied clinical decision support for a high-risk patient population." Journal of the American Medical Informatics Association 20.2 (2013): 377-380. http://171.67.114.118/content/20/2/377.abstract

Williams, D. J. P. "Medication errors." Journal of the Royal College of Physicians of Edinburgh 37.4 (2007): 343. http://www.rcpe.ac.uk/journal/issue/journal_37_4/Williams.pdf

Why Charities Might Differ in Effectiveness by Many Orders of Magnitude


Summary: Brian has recently argued that because "flow-through" (second-order) effects are so uncertain, charities don't (in expectation) differ in their effectiveness by more than a couple of orders of magnitude. I give some arguments here about why that might be wrong.

1. Why does anything differ by many orders of magnitude?

Some cities are very big. Some are very small. This fact has probably never bothered you before. But when you look at how city sizes stack up, the pattern looks somewhat peculiar:

Taken from Gibrat's Law for (All) Cities, Eeckhout 2004.

The X-axis is the size of the city, on a (natural) logarithmic scale. The Y-axis is the density (fraction) of cities with that population. The peak is around 8 on the X-axis, which corresponds to $e^8\approx 3,000$ people.

You can see that the empirical distribution of (log) city sizes almost perfectly matches a normal ("bell curve") distribution. What's the explanation for this? Is mayoral talent distributed exponentially? When deciding to move to a new city, do people first take the log of the new city's size and then roll some normally-distributed dice?

It turns out that this is solely due to dumb luck and mathematical inevitability.

Suppose every city grows by a random amount each year. One year, it will grow 10%, the next 5%, the year after it will shrink by 2%. After these three years, the total change in population is
$$1.10\cdot 1.05\cdot 0.98$$
As in the above graph, we take the log
$$\log\left(1.10\cdot 1.05\cdot 0.98\right)$$
A property of logarithms you may remember is that $\log(a\cdot b)=\log a + \log b$. Rewriting the expression above with this property gives
$$\log 1.10+ \log 1.05+\log 0.98$$
The central limit theorem tells us that when you add up a bunch of independent random things, you end up with (approximately) a normal distribution. We're clearly adding a bunch of random things together here, so we end up with the bell curve we see above.
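
If you'd rather see this numerically than algebraically, here's a toy simulation in Python (my own sketch, not taken from Eeckhout's paper): give each of 10,000 identical "cities" a century of independent percentage shocks and look at the logs of the resulting sizes.

import math
import random
import statistics

random.seed(0)

def simulate_city(years=100, start=1000.0):
    """One city's size after `years` of independent percentage shocks."""
    size = start
    for _ in range(years):
        size *= 1 + random.uniform(-0.05, 0.10)  # shrink up to 5%, grow up to 10%
    return size

sizes = [simulate_city() for _ in range(10_000)]
logs = [math.log(s) for s in sizes]

# If the logs are roughly normal, about 68% of them should fall within
# one standard deviation of the mean.
mu, sigma = statistics.mean(logs), statistics.stdev(logs)
share = sum(abs(x - mu) <= sigma for x in logs) / len(logs)
print("share of log-sizes within 1 sd of the mean: %.3f" % share)

# The raw sizes, by contrast, are right-skewed: the mean sits above the median.
print("mean/median of raw sizes: %.2f" % (statistics.mean(sizes) / statistics.median(sizes)))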

2. Why charities might differ by many orders of magnitude

Some of Brian's points are about how, even if a charity is good in one dimension, it's not necessarily good in others (performance is "independent"). The point of the above is to demonstrate that we don't need dependence between dimensions to get widely varying impacts. We just need a structure where talents are randomly distributed but, critically, combine multiplicatively.

There are some talents which obviously cause a multiplier. A charity's ability to handle logistics ("reduce overhead") will multiply the effectiveness of everything else they do. Their ability to increase the "denominator" of their intervention (number of bednets distributed, number of leaflets handed out, etc.) is another. PR skills, fundraising etc. all plausibly have a multiplicative impact.

More controversially, some proxies for flow-through effects might have a multiplicative impact. Scientific output is probably more valuable in times of peace than in times of war. GDP increases are probably better when there's a fair and just government, instead of the new wealth going to a few plutocrats.

Here's a simulation of charities' effectiveness with 10 dimensions, each uniformly drawn from the range [0,10].
The red line corresponds to Brian's scenario (where each dimension is independent) and, as he describes, effectiveness is very closely clustered around 50. But as the dimensions have more interactions, the effectiveness spreads out, until in the purely multiplicative model (purple line) charities differ by many orders of magnitude.
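
The plot itself isn't reproduced here, but a rough re-creation of the two extreme cases in Python (my own sketch of what I take the setup to be, with an arbitrary sample size) looks like this: each charity gets 10 scores drawn uniformly from [0, 10], and we compare summing them with multiplying them.

import math
import random
import statistics

random.seed(0)

N_CHARITIES, N_DIMS = 10_000, 10

additive, multiplicative = [], []
for _ in range(N_CHARITIES):
    scores = [random.uniform(0, 10) for _ in range(N_DIMS)]
    additive.append(sum(scores))              # dimensions simply add up
    multiplicative.append(math.prod(scores))  # each dimension multiplies the others

print("additive:       min %.1f  median %.1f  max %.1f"
      % (min(additive), statistics.median(additive), max(additive)))
print("multiplicative: min %.2g  median %.2g  max %.2g"
      % (min(multiplicative), statistics.median(multiplicative), max(multiplicative)))
# The additive totals cluster tightly around 50; the multiplicative ones
# span many orders of magnitude.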

3. Picking winners

Say that impact is the product of measurable, direct impacts and unmeasurable flow-through effects; algebraically, $I=DF$. If the direct and flow-through components are independent, then
$$E[I]=E[DF]=E[D]E[F]$$
So if two charities differ by a factor of, say, 1,000 in their expected direct impact, then their total impact would (in expectation) differ by a factor of 1,000 as well.
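
Spelling that step out, and assuming (since we have no information distinguishing them) that both charities face the same expected flow-through multiplier $E[F]$:
$$\frac{E[I_1]}{E[I_2]}=\frac{E[D_1]\,E[F]}{E[D_2]\,E[F]}=\frac{E[D_1]}{E[D_2]}=1000$$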

This isn't a perfect model. But I do think it's not always correct to model impacts as a sum of iid variables, and there's a plausible case that not only do charities differ "astronomically", but that we can expect those differences even with our limited knowledge.

Acknowledgements

This post was obviously inspired by Brian, and I talked about it with Gina extensively. The log-normal proof is known as Gibrat's Law and is not due to me.