### On my inability to improve decision making

Summary: It’s been suggested that improving decision making is an important thing for altruists to focus on, and there is a wide variety of computer programs which aim to improve clinicians’ decision making. Since I earn to give as a programmer making healthcare software, you might naively assume that some of the good I do comes from improving clinician decision making. You would be wrong. I give an overview of the problem, and suggest that the obstacles which make improving medical decision making hard are general ones, which may mean that low-hanging fruit is rare in the field of decision support.

> Against stupidity the gods themselves contend in vain. - Friedrich Schiller

The HITECH Act (part of the 2009 US stimulus bill) is expected to spend about $35 billion on increasing the adoption of electronic medical records. On the other side of the pond, the NHS’ troubled IT program ended up costing around £20 billion, up a mere order of magnitude from the original £2.3 billion estimate. An explicit cost-benefit analysis of decision support research would require a much more careful analysis of these expenditures, but my goal is just to point out that the lack of results is not due to lack of trying. Decades of work and billions of dollars have been spent in this area.

### Efficiency

In retrospect, I think one argument we could have used to predict the non-cost-effectiveness of these interventions is to ask why they hadn’t already been invented. The pre-computer medical world is filled with checklists, so if there were an easy way to detect mistyped prescriptions or diagnose bacterial infections, it would probably already be in use. This is a sort of “efficiency” argument: if there is some easy way to improve decision making, it has probably already been implemented. So when we examine a proposed decision support technique, we might want to ask why it hasn’t already been done. If we can’t pin the answer on a new disruptive technology or something similar, we might want to be skeptical that the problem is really so easy to solve.

### Acknowledgements

Brian Tomasik proofread an earlier version of this post.

### Works Cited

Ash, Joan S., Marc Berg, and Enrico Coiera. "Some unintended consequences of information technology in health care: the nature of patient care information system-related errors." Journal of the American Medical Informatics Association 11.2 (2004): 104-112. http://171.67.114.118/content/11/2/104.full

Hemens, Brian J., et al. "Computerized clinical decision support systems for drug prescribing and management: a decision-maker-researcher partnership systematic review." Implementation Science 6.1 (2011): 89. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3179735/

Reckmann, Margaret H., et al. "Does computerized provider order entry reduce prescribing errors for hospital inpatients? A systematic review." Journal of the American Medical Informatics Association 16.5 (2009): 613-623.

Tiwari, Ruchi, et al. "Enhancements in healthcare information technology systems: customizing vendor-supplied clinical decision support for a high-risk patient population." Journal of the American Medical Informatics Association 20.2 (2013): 377-380. http://171.67.114.118/content/20/2/377.abstract

Williams, D. J. P. "Medication errors." Journal of the Royal College of Physicians of Edinburgh 37.4 (2007): 343. http://www.rcpe.ac.uk/journal/issue/journal_37_4/Williams.pdf

### Why Charities Might Differ in Effectiveness by Many Orders of Magnitude

Summary: Brian has recently argued that because "flow-through" (second-order) effects are so uncertain, charities don't (in expectation) differ in their effectiveness by more than a couple orders of magnitude. I give some arguments here for why that might be wrong.

### 1. Why does anything differ by many orders of magnitude?

Some cities are very big. Some are very small. This fact has probably never bothered you before. But when you look at how city sizes stack up, it looks somewhat peculiar:

[Figure: distribution of city sizes, taken from Gibrat's Law for (All) Cities, Eeckhout 2004. The X-axis is the size of the city on a (natural) logarithmic scale; the Y-axis is the density (fraction) of cities with that population.]

The peak is around the mark of 8 on the X-axis, which corresponds to $e^8 \approx 3{,}000$ people. You can see that the empirical distribution of city sizes almost perfectly matches a normal ("bell curve") distribution. What's the explanation for this? Is mayoral talent distributed exponentially? When deciding to move to a new city, do people first take the log of the new city's size and then roll some normally-distributed dice?
It turns out that this is due solely to dumb luck and mathematical inevitability. Suppose every city grows by a random amount each year. One year it will grow 10%, the next 5%, and the year after it will shrink by 2%. After these three years, the total change in population is

$$1.10\cdot 1.05\cdot 0.98$$

As in the above graph, we take the log:

$$\log\left(1.10\cdot 1.05\cdot 0.98\right)$$

A property of logarithms you may remember is that $\log(a\cdot b)=\log a + \log b$. Rewriting the previous expression with this property gives

$$\log 1.10+ \log 1.05+\log 0.98$$

The central limit theorem tells us that when you add up a bunch of independent random things, you end up with a normal distribution. We're clearly adding up a bunch of random things here, so we end up with the bell curve we see above.

### 2. Why charities might differ by many orders of magnitude

Some of Brian's points are about how even if a charity is good in one dimension, it's not necessarily good in others (performance is "independent"). The point of the above is to demonstrate that we don't need dependence to get widely varying impacts. We just need a structure where people's talents are randomly distributed but, critically, their talents have a multiplicative effect.

There are some talents which obviously cause a multiplier. A charity's ability to handle logistics ("reduce overhead") will multiply the effectiveness of everything else it does. Its ability to increase the "denominator" of its intervention (number of bednets distributed, number of leaflets handed out, etc.) is another. PR skills, fundraising, etc. all plausibly have a multiplicative impact.

More controversially, some proxies for flow-through effects might have a multiplicative impact. Scientific output is probably more valuable in times of peace than in times of war. GDP increases are probably better when there's a fair and just government, instead of the new wealth going to a few plutocrats.
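To make section 1's argument concrete, here is a minimal simulation sketch. The specific numbers (yearly growth between -5% and +10%, 10,000 cities, 100 years) are illustrative assumptions of mine, not from the original figure:

```python
import math
import random

random.seed(0)  # fixed seed so the run is reproducible

def grow_city(start_pop: float, years: int) -> float:
    """Multiply the population by an independent random growth factor each year."""
    pop = start_pop
    for _ in range(years):
        pop *= 1 + random.uniform(-0.05, 0.10)  # yearly change between -5% and +10%
    return pop

# Simulate many cities and look at the *log* of the final populations.
logs = [math.log(grow_city(1000.0, 100)) for _ in range(10_000)]

mean = sum(logs) / len(logs)
var = sum((x - mean) ** 2 for x in logs) / len(logs)

# log(population) is a sum of 100 independent terms, so by the central
# limit theorem it is approximately normal -- which means population
# itself is approximately log-normal, as in the city-size figure.
print(f"log-population: mean={mean:.2f}, variance={var:.2f}")
```

Plotting a histogram of `logs` reproduces the bell curve from the figure; no coordination between cities is needed, only independent multiplicative shocks.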
Here's a simulation of charities' effectiveness with 10 dimensions, each uniformly drawn from the range [0, 10]. The red line corresponds to Brian's scenario (where each dimension contributes independently and additively), and as he describes, effectiveness is very closely clustered around 50. But as the dimensions interact more, the effectiveness spreads out, until in the purely multiplicative model (purple line) charities differ by many orders of magnitude.

### 3. Picking winners

Say that impact is the product of measurable, direct impacts and unmeasurable flow-through effects. Algebraically: $I=DF$. If $D$ and $F$ are independent, the expectation factors:
$$E[I]=E[DF]=E[D]E[F]$$
So if two charities differ by a factor of, say, 1,000 in their direct impact, then their total impacts will (in expectation) differ by a factor of 1,000 as well.
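The simulation described above can be sketched as follows. The original post's code isn't shown, so this is my reconstruction of its two extreme cases (purely additive vs. purely multiplicative), using the stated parameters of 10 dimensions drawn uniformly from [0, 10]:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

N_CHARITIES = 10_000
N_DIMS = 10

additive = []        # dimensions summed: Brian's independent scenario
multiplicative = []  # dimensions multiplied: talents act as multipliers

for _ in range(N_CHARITIES):
    scores = [random.uniform(0, 10) for _ in range(N_DIMS)]
    additive.append(sum(scores))
    product = 1.0
    for s in scores:
        product *= s
    multiplicative.append(product)

# Ratio between the best and worst charity under each model.
additive_spread = max(additive) / min(additive)
multiplicative_spread = max(multiplicative) / min(multiplicative)

print(f"additive spread: {additive_spread:.1f}x")
print(f"multiplicative spread: {multiplicative_spread:.1e}x")
```

Under the additive model the best and worst charities differ by well under an order of magnitude, while the multiplicative model produces spreads of many orders of magnitude from the very same independent inputs.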

This isn't a perfect model. But I do think it's not always correct to model impact as a sum of iid variables, and there is a plausible case to be made that not only do charities differ "astronomically" in effectiveness, but that we can expect those differences even given our limited knowledge.

### Acknowledgements

This post was obviously inspired by Brian, and I talked about it with Gina extensively. The log-normal proof is known as Gibrat's Law and is not due to me.