An Interactive Guide to Population Ethics

Population ethics is the branch of philosophy that deals with questions involving - you guessed it - populations. Most of the problems it tackles involve tradeoffs between the quantity and the quality of life. In bumper-sticker form, the question investigated in this post is:
Should we make more happy people, or more people happy?1
When a disaster occurs, most of us have the intuition that we should help improve the lives of survivors. But very few of us feel an obligation to have more children to offset the population loss. (i.e. our intuitions line up with making "more people happy" instead of "more happy people".) This is a surprisingly difficult position to defend, but it reminds me of Brian Tomasik's joke:
  • Bob: "Ouch, my stomach hurts."
  •  
  • Classical total utilitarian: "Don't worry! Wait while I create more happy people to make up for it."
  • Average utilitarian: "Never fear! Let me create more people with only mild stomach aches to improve the average."
  • Egalitarian: "I'm sorry to hear that. Here, let me give everyone else awful stomach aches too."
  • ...
  • Negative total utilitarian: "Here, take this medicine to make your stomach feel better."

Limiting theorems

It turns out that population ethics has, to a certain extent, been "solved". This is a technical result, so uninterested readers can skip to the next section, but basically the various questions I discuss in this blog post are the only questions remaining. Specifically:
Let $\mathbf u = \left(u_1,u_2,\dots\right)$ be the utilities of people $1,2,\dots$ and similarly let $\mathbf u' = \left(u_1',u_2',\dots\right)$ be the utilities of a different population. Further, suppose we have a "reasonable" way of defining which of two populations is better. Then there is a "value function" $V$ such that population $\mathbf u$ is preferable to population $\mathbf u'$ if and only if $V(\mathbf u) > V(\mathbf u')$. Furthermore, $V$ has the form: $$V(\mathbf u)=f(n)\sum_{i=1}^{n}\left[ g(u_i)-g(c)\right]$$
The three sections of the blog post concern:
  1. The concavity of $g$, which moderates our inequality aversion
  2. The value of $c$, which is known as the "critical level"
  3. And the form of $f$, which is the "number dampening"
I hope to write a post soon on why these are the only three remaining questions, but interested readers can see (Blackorby, Bossert and Donaldson, 2000) in the meantime.2
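
To make the general form concrete, here's a minimal Python sketch of $V$ (the function names and default arguments are mine, not part of the theorem):

```python
def V(u, g=lambda x: x, c=0, f=lambda n: 1):
    """General value function: V(u) = f(n) * sum_i [g(u_i) - g(c)]."""
    return f(len(u)) * sum(g(ui) - g(c) for ui in u)

# With the defaults (g = identity, c = 0, f constant) this reduces to
# classical total utilitarianism: only the sum of utilities matters.
print(V([50, 50]), V([100, 0]))  # 100 100
```

The rest of the post amounts to asking which choices of $g$, $c$ and $f$ we can defend.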

Inequality

In the wake of the financial crisis, movements like Occupy Wall Street raised wealth inequality as a major political issue.


[Chart: wealth inequality in the US]

An intuition that underlies these concerns is that the worse off people are, the more important it is to help them. We might donate to a charity to help starving people eat, but not one which helps rich yuppies eat even fancier food. The formal way to model this is to say that individual utility contributes to society's overall well-being with diminishing returns (i.e. additional utility to a person benefits society less and less as that person becomes better off).

[Interactive chart: $g(x)=\sqrt{x}$]
(As in the rest of this post, you can use the slider to modify the function and see how changing $g$ affects our ethical choices.)

One way of visualizing the impact this has on our decisions about populations is to use an indifference curve. In the chart below, the x-axis represents the utility of person X and the y-axis the utility of person Y. Each line on the chart indicates a set of points between which we are indifferent. For example, the blue line includes both (50,50) and (100,0): if we don't believe utility has diminishing returns, we don't care how utility is divided up among the populace, since 50 + 50 = 100 + 0.

[Interactive chart: indifference curves for $g(x)=\sqrt{x}$]
You can see that the stronger we think returns diminish, the more inequality-averse we become. For example, if $g(x)=\sqrt{x}$ we are indifferent between $(64,4)$ and $(100,0)$ since $\sqrt{64} + \sqrt{4} = \sqrt{100} + \sqrt{0}$, meaning that a 36-point increase in person X's welfare is needed to offset the 4-point loss in person Y's welfare, since Y's welfare is so low. This is an important point, so I'll call it out:
Inequality aversion is a conclusion of population ethics, not an assumption3
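
In code form, the point is nearly a one-liner (a sketch; the square root and the sample populations are just illustrative choices):

```python
import math

g = math.sqrt  # concave, i.e. diminishing returns to utility

equal, unequal = [50, 50], [100, 0]
print(sum(map(g, equal)))    # ~14.14
print(sum(map(g, unequal)))  # 10.0: the equal split wins under concave g
```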

Interlude - The Representation of Populations

We've just shown a very non-trivial result: if $g$ is concave (meaning that increasing utility has diminishing returns), then we are inequality-averse. (Conversely, if $g$ were convex then we would be inequality-seeking, but I don't know of anyone who has argued this.) One problem we're going to run into soon is that there are too many variables to easily visualize. So I want to bring up a certain fact about population ethics:
For any population $u$, there is a population $u'$ such that:
  1. The number of people in $u$ and $u'$ are the same
  2. Everyone in $u'$ has the same utility as each other (i.e. $u'$ is "perfectly equitable")
  3. And we are indifferent between $u$ and $u'$
For example, if we believed utility did not have diminishing returns, we would be indifferent between $(75,25)$ and $(50,50)$ because the total utility is the same. This means that:
Any time we want to compare populations $p$ and $q$, we can instead compare $p'$ and $q'$ where both $p'$ and $q'$ are perfectly equitable (i.e. every person in $p'$ has the same utility as each other, and similarly for $q'$).
A perfectly equitable population can be parameterized by exactly two variables: the number of people in the population, and the average utility. While there are theoretical implications of this, the most relevant fact for us is that it means we can keep using two-dimensional graphs.
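
For a concave $g$ this equitable equivalent is easy to compute: it's $g^{-1}$ applied to the average of the $g(u_i)$. A minimal sketch, assuming $g(x)=\sqrt{x}$ and nonnegative utilities:

```python
import math

def equitable_equivalent(u, g=math.sqrt, g_inv=lambda y: y ** 2):
    """Common utility level u' such that len(u) people at u' are exactly
    as good as the population u, under V = sum of g(u_i)."""
    return g_inv(sum(g(ui) for ui in u) / len(u))

print(equitable_equivalent([75, 25]))  # ~46.65; with g = identity it would be 50
```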

Critical Levels

Back to the topic at hand. The following assumption sounds very strange, but it's made quite frequently in the literature:
Even if your life is worth living to you and you don't influence anyone else, that doesn't mean the population as a whole benefits from your existence. Specifically, your welfare must be greater than a certain amount, known as the "critical level", before your existence benefits society.4
More formally:
Value to society = utility - critical level
Or $$V(\mathbf u)=\sum_{i=1}^{n} \left(u_i - c\right)$$ where $c$ is the critical level. (Note that $c$ is a constant, and independent of $\mathbf u$.) I think this is best illustrated with an example. Suppose we have a constant amount of utility, and we're wondering how many people to divide it up between. (As mentioned earlier, this is a perfectly equitable population, so everyone gets an equal share.) Here's how changing the critical level changes our opinion of the optimal population size:
[Interactive chart: optimal population size, $c=10$]
The impact of critical levels can be summarized as:
Positive critical levels give a "penalty" for every person who's alive, whereas negative critical levels give a "bonus"
This is clear since $$V(\mathbf u)=\sum_{i=1}^{n} \left(u_i - c\right)=\left(\sum_{i=1}^{n} u_i\right)-nc$$ Here are indifference curves for different critical levels:
[Interactive chart: indifference curves for $c=10$]
As the critical level gets lower, we are increasingly willing to decrease average utility in exchange for increasing the population size. The major motivation for having a positive critical level is that it avoids the mere addition paradox (sometimes known as the "Repugnant Conclusion"):
For any possible population of at least ten billion people, all with a very high quality of life, there must be some much larger imaginable population whose existence, if other things are equal, would be better even though its members have lives that are barely worth living.5

In tabular form:

| Population | Size | Average Utility | Total Value ($c=0$) | Total Value ($c=10$) |
|---|---|---|---|---|
| A | 1,000 | 100 | 100,000 | 90,000 |
| B | 10,000,000 | 0.1 | 1,000,000 | -99,000,000 |
| C | 1,000 | -4 | -4,000 | -14,000 |
| D | 100 | -1 | -100 | -1,100 |

Many people have the intuition that A is preferable to B. We can see that only by having a positive critical level can we make this intuition hold.
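
The table's columns are easy to reproduce (a minimal sketch; each population is represented by its size and average utility, which suffices for perfectly equitable populations):

```python
def V(size, avg_utility, c):
    """V = sum_i (u_i - c) = size * (avg_utility - c) for an equitable population."""
    return size * (avg_utility - c)

populations = {"A": (1_000, 100), "B": (10_000_000, 0.1),
               "C": (1_000, -4), "D": (100, -1)}
for name, (n, u) in populations.items():
    print(name, V(n, u, c=0), V(n, u, c=10))
# A overtakes B only because the per-person penalty c > 0 punishes B's huge size
```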

Unfortunately, we can also see that having a positive value of c results in what Arrhenius has called the "sadistic conclusion": We prefer population C to population B, even though everyone in C is suffering and the people in B have positive lives. And if c is negative we have another sort of sadistic conclusion: We prefer C to D even though there are fewer people suffering in D and no one is better off in C than they are in D.

Some people will bite the bullet and accept the Sadistic Conclusion in order to avoid the Repugnant one. But it's hard to make the case that it's the less counterintuitive of the two, meaning we must have a critical level of zero.

Number Dampening

Canadian philosopher Thomas Hurka has argued for the two following points:
  1. For small populations, we should care about total welfare
  2. For large populations, we should care about average welfare

Independent of the question about whether people should care more about average welfare for large populations, it seems clear that in practice we do (as I've discussed before).

The way to formalize this is to introduce a function $f$:

$$V(\mathbf u)=f(n)\sum_{i=1}^{n}u_i$$ where $$f(n) = \left\{ \begin{array}{lr} 1 & : n \leq n_0 \\ n_0/n & : n > n_0 \end{array} \right.$$ If we have fewer than $n_0$ people (i.e. if the population is "small") then this is equivalent to total utilitarianism. If we have more (i.e. the population is "large") then it's equivalent to average utilitarianism. Graphically:
[Interactive chart: $n_0=50$]
The non-differentiability at $n=n_0$ is pretty ridiculous though, so instead of a strict cutoff we could claim that there are diminishing returns to population size, just like we claimed that there are diminishing returns to utility in the first section. For example, we could state that $$V(\mathbf u)=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}u_i$$ This gives us a graph like:
[Interactive chart: $V(\mathbf u)=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}u_i$]

Even with this modification though, it still seems pretty implausible that population size has diminishing returns. The relevant fact is that $\sqrt{x+y}\not=\sqrt{x}+\sqrt{y}$, so we can't just break populations apart.6 Therefore, we have to consider every single person who has ever lived (and who ever will live) before we can make ethical decisions. As an example of the odd behavior this "holistic" reasoning implies:

Some researchers are on the verge of discovering a cure for cancer. Just before completing their research, they learn that the population of humans 50,000 years ago was smaller than they thought. As a result, they drop their research to focus instead on having more children.

An example will explain why this is the correct behavior if you believe in number-dampening. Say we're using the value function

$$V(\mathbf u)=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}u_i$$

and we can either move everyone alive from having 10 utils up to 10.1 (discovering cancer cure) or else add a new person with utility 100 (have a child). Which option is best depends on the population size:

| Population size | Value of society w/ cancer cure | Value of society w/ new child |
|---|---|---|
| 500 | $\frac{1}{\sqrt{500}}\left(500\cdot 10.1\right)\approx 226$ | $\frac{1}{\sqrt{501}}\left(500\cdot 10 + 100\right)\approx 228$ |
| 5,000 | $\frac{1}{\sqrt{5000}}\left(5000\cdot 10.1\right)\approx 714$ | $\frac{1}{\sqrt{5001}}\left(5000\cdot 10 + 100\right)\approx 708$ |

Having a child is better if the population size is 500, but worse if the population size is 5,000.
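
A sketch that locates the crossover under these assumptions (10 → 10.1 from the cure, a utility-100 child, and $f(n)=1/\sqrt{n}$, all taken from the example above):

```python
import math

def V(total_utility, n):
    """Number-dampened value: total utility divided by sqrt(population size)."""
    return total_utility / math.sqrt(n)

def cure_beats_child(n):
    return V(n * 10.1, n) > V(n * 10 + 100, n + 1)

n = 1
while not cure_beats_child(n):  # smallest population where the cure wins
    n += 1
print(n)  # 950: below this, number dampening says to have the child instead
```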

It goes against our intuition that the population size in the distant past should affect our decisions about what to do today. One simple way around this is to just declare that "population size" is the number of people currently alive, not the people who have ever lived. Nick Beckstead's thesis has an interesting response:

The Separated Worlds: There are only two planets with life. These planets are outside of each other’s light cones. On each planet, people live good lives. Relative to each of these planets’ reference frames, the planets exist at the same time. But relative to the reference frame of some comet traveling at a great speed (relative to the reference frame of the planets), one planet is created and destroyed before the other is created.

To make this exact, let's say each planet has 1,000 people each with utility level 100. Then we have:

| Dampening | Value in the planets' frame | Value in the comet's frame |
|---|---|---|
| $f(n)=1/\sqrt{n}$ | $\frac{200{,}000}{\sqrt{2000}}\approx 4{,}472$ | $2\cdot\frac{100{,}000}{\sqrt{1000}}\approx 6{,}325$ |
| None | 200,000 | 200,000 |

In the planets' frame the two planets coexist, forming a single population of 2,000; in the comet's frame they never coexist, so each planet is valued as a separate population of 1,000 and the values add.

How valuable a population is shouldn't change if you split it into arbitrary sub-populations, so it's hard to make the case for number dampening.

Conclusion

I started off by claiming (without proof) that for any "reasonable" way of determining which population is better, we could equivalently use a value function $V$ such that population $\mathbf u$ is better than population $\mathbf u'$ if and only if $V(\mathbf u) > V(\mathbf u')$. Furthermore, I claimed $V$ must have the form: $$V(\mathbf u)=f(n)\sum_{i=1}^n\left[g(u_i)-g(c)\right]$$ In this post, we investigated modifying $f$, $g$ and $c$. However, we saw that having $c$ be anything but zero leads to a "sadistic conclusion", and having $f$ be non-constant leads to the "Separated Worlds" problem, meaning we conclude that $V$ must be of the form $$V(\mathbf u) = \sum_{i=1}^n g(u_i)$$ where $g$ is a continuous, monotonically increasing function. This is basically classical (or total) utilitarianism, with perhaps some inequality aversion.

It's common to view ethicists as people who just talk all day without making any progress on the issues, and to some extent this reputation is deserved. But in the area of population ethics, I hope I've convinced you that philosophers have made tremendous progress, to the point that one major question (the form of the value function) has been almost completely solved.

Footnotes

  1. I'm sure I didn't come up with this phrase, but I can't find who originally said it. I'd be much obliged to any commenters who can let me know.
  2. The obvious objection I'm ignoring here is the "person-affecting view", or "the slogan." I'm pretty skeptical of it, but it's worth pointing out that not all philosophers agree that population ethics must be of this form.
  3. Of course, if we came to the conclusion that inequality is good, we might start questioning our assumptions, so this is perhaps not completely true.
  4. If the critical level is negative, then the converse holds (your life can suck but you'll still be a benefit to society). This is rarely argued.
  5. From Parfit's original Reasons and Persons
  6. This isn't just a problem with the square root - if $f(x+y)=f(x)+f(y)$ with $x,y\in\mathbb R$ then $f(x)=cx$ if $f$ is non-"pathological". (This is known as Cauchy's functional equation.)


On my inability to improve decision making

Summary: It’s been suggested that improving decision making is an important thing for altruists to focus on, and there are a wide variety of computer programs which aim to improve clinician decision making ability. Since I earn to give as a programmer making healthcare software, you might naively assume that some of the good I do is through improving clinician decision making. You would be wrong. I give an overview of the problem, and suggest that the problems which make improving medical decision making hard are general, and might suggest low-hanging fruit is rare in the field of decision support.

Against stupidity the gods themselves contend in vain. - Friedrich Schiller

In 1966, the Massachusetts General Hospital Utility Multi-Programming System (MUMPS) was created as one of the first healthcare information technology platforms. Running on the “cheap” ($70,000) PDP-7, it spread to become one of the most common pieces of infrastructure in healthcare - to this day, if you walk into your doctor’s office there’s a good chance some part of what you see has MUMPS in its stack.

A few years later, researchers at Stanford using a computer with the approximate power of today’s wristwatches created MYCIN, a program capable of outperforming human physicians in diagnosing bacterial infections. Unlike MUMPS, such programs are still far from use in everyday care today: when I go to the doctor’s office I’m not diagnosed by computerized super-doctors but instead by the time-honored combination of human gut, skill and the occasional glance at a reference volume. Even “low-skill” jobs like calling patients to remind them about their appointments are still usually done by receptionists or temps with a printed call list, a process essentially indistinguishable from 50 years ago.

If people are better at making decisions, then we will be better at a whole range of things, making decision-support technology an important priority for altruists. It was listed as one of 80,000 Hours’ top priorities, for example. I haven’t seen many empirical examinations of how decision-making technology improves (or fails to improve) our abilities, so I offer healthcare IT as a case study.

Different, not fewer, problems

Clinicians sometimes order the wrong thing. Perhaps they forget the dosing and accidentally order 200 milligrams instead of 200 micrograms, or they order penicillin because they forgot that the patient’s allergic.

It’s relatively easy to program a computer to warn the user when their prescription is off by an order of magnitude or conflicts with a recorded allergy, but it turns out that doctors are actually pretty good at what they do most of the time. If they order an unusually high dose, it’s probably because the patient has an unusually severe case. If they order a med that the patient is allergic to, it’s probably because they decided the benefits outweigh the risks. As a result, these warnings are almost always noise without a signal.

The result is familiar to anyone who used the version of Microsoft Office with Clippy: clinicians slam on the keyboard to close all message boxes without bothering to read the warnings, completely negating any possible benefits. This “alert fatigue” (as it is politely termed) sometimes stems from organizations’ fears of lawsuits keeping extraneous alerts around (Tiwari et al. 2013), but even in trials which are run specifically to improve health and are judged successful enough to publish, fewer than a fourth have any impact on patient outcomes (Hemens et al. 2011).

GIGO

Anyone who’s done machine learning is aware of the maxim “garbage in, garbage out”. Even the most amazing prediction algorithm will give bad results if you give it bad input, and current medical data is far from perfect.

Medical records are written of, by and for humans, and there is considerable resistance to changing that. If your program requires someone with MD-equivalent skills to translate the patient’s free-text chart into a discrete dataset that the software can analyse, why would you use it? You might as well just hire the doctor to do the diagnosis herself.

This problem is largely what’s held back programs like MYCIN. While they work great if your research grant provides for a grad student sweatshop to code data into your specialized format, they don’t work so well in the real world.
Doctor-Hardness

To summarize these two problems: people had originally thought they could slice off just a tiny piece of clinicians’ jobs and improve that without worrying about the rest. But it turned out that in order to do well in this tiny slice they needed to essentially replicate all of what a doctor does - in computer science terms, these problems are “doctor-hard”.

Cost

What have we spent to get these minimal benefits?

The NIH’s Biomedical Information Science and Technology initiative has funded about $350 million worth of research (not all of it in clinical decision support), but this amount pales in comparison to what governments have spent getting IT into the hands of front-line physicians.

The HITECH Act (part of the 2009 US stimulus bill) is expected to spend about $35 billion on increasing the adoption of electronic medical records. On the other side of the pond, the NHS’ troubled IT program ended up costing around £20 billion, up a mere order of magnitude from the original £2.3 billion estimate.

An explicit cost-benefit analysis of decision support research would require a lot more careful analysis of these expenditures, but my goal is just to point out that the lack of results is not due to lack of trying. Decades of work and billions of dollars have been spent in this area.

Efficiency

In retrospect, I think one argument we could have used to predict the non-cost-effectiveness of these interventions is to ask why they haven’t already been invented. The pre-computer medical world is filled with checklists, and so if there was an easy way to detect mistyped prescriptions or diagnose bacterial infections, it would probably already be used.

This is to make a sort of “efficiency” argument - if there were some easy way to improve decision making, it would probably already have been implemented. So when we’re examining a proposed decision support technique, we might want to ask why it hasn’t already been done. If we can’t pin it on a new disruptive technology or something similar, we might want to be skeptical that the problem is really so easy to solve.

Acknowledgements

Brian Tomasik proofread an earlier version of this post.

Works Cited

Ash, Joan S., Marc Berg, and Enrico Coiera. "Some unintended consequences of information technology in health care: the nature of patient care information system-related errors." Journal of the American Medical Informatics Association 11.2 (2004): 104-112. http://171.67.114.118/content/11/2/104.full

Hemens, Brian J., et al. "Computerized clinical decision support systems for drug prescribing and management: a decision-maker-researcher partnership systematic review." Implementation Science 6.1 (2011): 89. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3179735/

Reckmann, Margaret H., et al. "Does computerized provider order entry reduce prescribing errors for hospital inpatients? A systematic review." Journal of the American Medical Informatics Association 16.5 (2009): 613-623.

Tiwari, Ruchi, et al. "Enhancements in healthcare information technology systems: customizing vendor-supplied clinical decision support for a high-risk patient population." Journal of the American Medical Informatics Association 20.2 (2013): 377-380. http://171.67.114.118/content/20/2/377.abstract

Williams, D. J. P. "Medication errors." Journal of the Royal College of Physicians of Edinburgh 37.4 (2007): 343. http://www.rcpe.ac.uk/journal/issue/journal_37_4/Williams.pdf

Why Charities Might Differ in Effectiveness by Many Orders of Magnitude


Summary: Brian has recently argued that because "flow-through" (second-order) effects are so uncertain, charities don't (on expectation) differ in their effectiveness by more than a couple orders of magnitude. I give some arguments here about why that might be wrong.

1. Why does anything differ by many orders of magnitude?

Some cities are very big. Some are very small. This fact has probably never bothered you before. But when you look at how city sizes stack up, it looks somewhat peculiar:

[Chart: taken from Gibrat's Law for (All) Cities, Eeckhout 2004.]

The X-axis is the size of the city, in (natural) logarithmic scale. The Y-axis corresponds to the density (fraction) of cities with that population. The peak is around the mark of 8 on the X-axis, which corresponds to $e^8\approx 3,000$ people.

You can see that the empirical distribution of (log) city sizes almost perfectly matches a normal ("bell curve") distribution. What's the explanation for this? Is mayoral talent distributed exponentially? When deciding to move to a new city, do people first take the log of the new city's size and then roll some normally-distributed dice?

It turns out that this is solely due to dumb luck and mathematical inevitability.

Suppose every city grows by a random amount each year. One year, it will grow 10%, the next 5%, the year after it will shrink by 2%. After these three years, the total change in population is
$$1.10\cdot 1.05\cdot 0.98$$
As in the above graph, we take the log
$$\log\left(1.10\cdot 1.05\cdot 0.98\right)$$
A property of logarithms you may remember is that $\log(a\cdot b)=\log a + \log b$. Rewriting the expression above with this property gives
$$\log 1.10+ \log 1.05+\log 0.98$$
The central limit theorem tells us that when you add up a bunch of independent random quantities, you end up with a normal distribution. We're clearly adding a bunch of random growth rates together here, so we end up with the bell curve we see above.
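
Here's a quick simulation of that argument (the growth-rate range and time horizon are arbitrary choices of mine):

```python
import math
import random

random.seed(0)
log_sizes = []
for _ in range(10_000):
    pop = 1_000.0
    for _ in range(100):  # 100 years of independent random growth
        pop *= 1 + random.uniform(-0.05, 0.10)
    log_sizes.append(math.log(pop))

mean = sum(log_sizes) / len(log_sizes)
sd = math.sqrt(sum((x - mean) ** 2 for x in log_sizes) / len(log_sizes))
print(mean, sd)  # log-sizes cluster like a bell curve: sizes are log-normal
```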

2. Why charities might differ by many orders of magnitude

Some of Brian's points are about how even if a charity is good in one dimension, it's not necessarily good in others (performance is "independent"). The point of the above is to demonstrate that we don't need dependence to have widely varying impacts. We just need a structure where people's talents are randomly distributed, but critically their talents have a multiplicative effect.

There are some talents which obviously cause a multiplier. A charity's ability to handle logistics ("reduce overhead") will multiply the effectiveness of everything else they do. Their ability to increase the "denominator" of their intervention (number of bednets distributed, number of leaflets handed out, etc.) is another. PR skills, fundraising etc. all plausibly have a multiplicative impact.

More controversially, some proxies for flow-through effects might have a multiplicative impact. Scientific output is probably more valuable in times of peace than in times of war. GDP increases are probably better when there's a fair and just government, instead of the new wealth going to a few plutocrats.

Here's a simulation of charities' effectiveness with 10 dimensions, each uniformly drawn from the range [0,10].
The red line corresponds to Brian's scenario (where the dimensions contribute independently and additively) and, as he describes, effectiveness is very closely clustered around 50. But as the dimensions interact more, the effectiveness spreads out, until in the purely multiplicative model (purple line) charities differ by many orders of magnitude.
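
Here's a sketch of that simulation (my reconstruction; the sample count and the intermediate "interaction" cases from the original chart may differ):

```python
import random

random.seed(0)

def effectiveness(multiplicative):
    """A charity's skill in each of 10 dimensions, combined two ways."""
    dims = [random.uniform(0, 10) for _ in range(10)]
    if multiplicative:
        result = 1.0
        for d in dims:
            result *= d
        return result
    return sum(dims)

for mode, label in [(False, "additive"), (True, "multiplicative")]:
    samples = sorted(effectiveness(mode) for _ in range(100_000))
    p5, p95 = samples[5_000], samples[95_000]
    print(label, round(p5, 2), round(p95, 2))
# additive: the 5th-95th percentile range stays narrow around 50;
# multiplicative: it spans several orders of magnitude
```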

3. Picking winners

Say that impact is the product of measurable, direct impacts and unmeasurable flow-through effects. Algebraically: $I=DF$. If $D$ and $F$ are independent, then
$$E[I]=E[DF]=E[D]E[F]$$
So if two charities differ by a factor of, say, 1,000 in their direct impact, then their total impacts will (on expectation) differ by a factor of 1,000 as well.

This isn't a perfect model. But I do think that it's not always correct to model impacts as a sum of iid variables, and there is a plausible case to be made that not only do charities differ "astronomically" but we can expect those differences even with our limited knowledge.

Acknowledgements

This post was obviously inspired by Brian, and I talked about it with Gina extensively. The log-normal proof is known as Gibrat's Law and is not due to me.

Predictions of ACE's surveying results


Carl Shulman is polling people about their predictions for the results of the upcoming ACE study to encourage less biased interpretations. Here are mine.

Assuming the control group follows the data in e.g. the Iowa Women's Health Study, they should eat 166g of meat per day with sd 66g.1 (For the rest of this post, I'm going to assume everything is normally distributed, even though I realize that's not completely true.)

For mathematical ease, let's take our prior from the farm sanctuary study and say: 2% are now veg, and an additional 5% eat "a lot less" meat which I'll define as cutting in half. So the mean of this group is 159g (4.2% less) w/ sd 69g.

I don't know what tests they will do, but let's look at a t-test because that's easiest. The test statistic here is:
$$t=\frac{166-159}{\sqrt{\frac{66^2}{N_1}+\frac{69^2}{N_2}}}$$
Let's assume 5% of those surveyed were in the intervention group. Solving for $N$ in
$$1.96=\frac{7}{\sqrt{\frac{66^2}{.95N}+\frac{69^2}{.05N}}}$$
we find $N\approx 7{,}800$, meaning that I expect the null hypothesis to be rejected at the usual $\alpha=.05$ only if they collected at least roughly 7,800 survey responses.2 I'm leaning slightly towards it not being significant, but I'm not sure how much data they collected.
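
The arithmetic can be checked with a short sketch (same assumptions as above: a 7 g/day difference and a 95/5 control/intervention split):

```python
import math

def required_n(diff=7.0, sd_control=66.0, sd_treated=69.0,
               frac_treated=0.05, z=1.96):
    """Smallest total N such that t = diff / sqrt(s1^2/N1 + s2^2/N2) reaches z."""
    variance_term = (sd_control ** 2 / (1 - frac_treated)
                     + sd_treated ** 2 / frac_treated)
    return variance_term / (diff / z) ** 2

print(math.ceil(required_n()))  # ~7,825
```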

Here's my estimate of their estimate (I can't do this analytically, so this is based on simulations):
You can see that the expected outcome is the true difference of about 4 veg equivalents per 100 leaflets, but with such a small sample size there is a 25% chance that we'll find leafleted people were less likely to go veg.

Here's how a 50% confidence interval might shake out:

The left graph is the bottom of the CI, the right one is the top.

Putting Money where my Mouth Is

The point of this is to keep me from retro-justifying my belief that meta-research in animal-related fields is the most effective thing. I have a lot of model uncertainty, but I would broadly endorse the conclusions of the above. The following each represent ~2.5% probability events; if one occurs, I will take it as evidence that I'm wrong.
  • If a 50% CI is exclusively above 9 veg equivalents per 100 leaflets, then I think its ability to attract people to veganism outweighs the knowledge we'd gain from more studies. Therefore, I pledge $1,000 to VO or THL (or whatever top-ranked leafleting charity exists at the time).
  • If a 50% CI is exclusively below zero, then veg interventions in general are less useful than I thought. Therefore I pledge $1,000 to MIRI (or another x-risk charity, if e.g. GiveWell Labs has a recommendation by then).
I don't think my above model is completely correct, and I'm sure ACE will have a different parameterization, so I don't know that these are really the 5% tails, but I would consider either of them to be a surprising enough event that my current beliefs are probably wrong.

I am open to friendly charity bets (if result is worse than X I give money to your charity, else you give to mine), if anyone else is interested.

Footnotes
  1. I tried to use MLE to combine multiple analyses, but found that the standard deviation is > 10,000 g/day. It's a good thing ACE has professional statisticians on the job, because the data clearly is kind of complex.
  2. I used $d.f.=\infty$

An Improvement to "The Impossibility of a Satisfactory Population Ethics"


Gustaf Arrhenius has published a series of impossibility theorems involving ethics. His most recent is The Impossibility of a Satisfactory Population Ethics which basically shows that several intuitive premises yield a stronger version of the repugnant conclusion.

If you know me, you know that I believe that modern ("abstract") algebra can help resolve problems in ethics. This is one example: using some basic algebra, we can get a stronger result than Arrhenius while using weaker axioms.

This is a "standing on the shoulders of giants" type of result: mathematicians have had centuries to trim their axioms to the minimal required set, so once you're able to phrase your question in more standard notation you can quickly arrive at better conclusions. Similarly, the errors in Arrhenius' proof that I've noted in the footnotes are mostly errors of omission that many extremely smart people made, until others pointed out pathological cases where their assumptions were invalid.

Assumptions


We assume that it's possible to have lives that are worth living ("positive" welfare), lives not worth living ("negative" welfare) and ones on the margin ("neutral" welfare). Arrhenius doesn't specify what the relationship is between "positive" and "negative" welfare, but I think there's a very intuitive answer: they cancel each other out. Just as $(+1) + (-1) = 0$, a world with a person of $+1$ utility and one with $-1$ utility is equivalent to a world with people at the neutral level.1

We continue the analogy with addition by writing $Z=X+Y$ if $Z$ is the union of two populations $X$ and $Y$. Just as with normal addition, we assume that $X+Y$ is always defined2 and that we can move parentheses around however we want, i.e. $(X+Y)+Z=X+(Y+Z)$. Lastly, I'm going to assume that the order in which you add people doesn't matter, i.e. $X+Y=Y+X$.3 I will finish the analogy with addition by specifying that welfare is isomorphic to the integers.4

(The above is just a long-winded way of saying that population ethics is isomorphic to the free abelian group on $\mathbb Z$.)

Also, for simplicity, I will write $nX$ for $\underbrace{X+\dots+X}_{n\text{ times}}$.5

Lastly, we need to define our ordering. I'll use the notation that $X\leq Y$ means "Population $X$ is morally worse than population $Y$" and require that $\leq$ is a quasi-order, i.e. $X\leq X$ and $X\leq Y, Y\leq Z$ implies that $X\leq Z$. Notably, this does not require us to believe that populations are totally ordered, i.e. there may be cases where we aren't sure which population is better.
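
Concretely, these axioms say a population is just a multiset of welfare levels in which opposite lives cancel. Here's a minimal sketch (the dict-of-counts representation and function names are mine):

```python
from collections import defaultdict

def normalize(people):
    """Net counts at positive welfare levels: a life at welfare -w cancels
    a life at welfare +w (footnote 1), and neutral lives drop out."""
    net = defaultdict(int)
    for welfare, count in people.items():
        if welfare > 0:
            net[welfare] += count
        elif welfare < 0:
            net[-welfare] -= count
    return {w: c for w, c in net.items() if c != 0}

def add(X, Y):
    """The group operation: union of two populations."""
    merged = defaultdict(int, normalize(X))
    for w, c in normalize(Y).items():
        merged[w] += c
    return {w: c for w, c in merged.items() if c != 0}

print(add({+1: 1, -1: 1}, {5: 2}))  # {5: 2}: the +1 and -1 lives cancel out
```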

The major controversial assumption we need from Arrhenius is what he calls "non-elitism": for any $X,Y$ with $X-1>Y$ there is an $n>0$ such that for any population $D$ consisting of people with welfare levels between $X$ and $Y$: $(n+1)(X-1)+D\geq X+nY+D$. In less formal terms, this is basically saying that there are no "infinitely good" welfare levels.

Claim


We claim that any group following the above axioms results in:
The Very Repugnant Conclusion: For any perfectly equal population with very high positive welfare, and for any number of lives with very negative welfare, there is a population consisting of the lives with negative welfare and lives with very low positive welfare which is better than the high welfare population, all things being equal.

Unused Assumptions


The following are assumptions Arrhenius makes which are unused. (Note: these are verbatim quotes from his paper, unlike the other assumptions.)

(Exercise for the advanced reader: figure out which of these also follow from the assumptions we did use.)
  1. The Egalitarian Dominance Condition: If population A is a perfectly equal population of the same size as population B, and every person in A has higher welfare than every person in B, then A is better than B, other things being equal.
  2. The General Non-Extreme Priority Condition: There is a number n of lives such that for any population X, and any welfare level A, a population consisting of the X-lives, n lives with very high welfare, and one life with welfare A, is at least as good as a population consisting of the X-lives, n lives with very low positive welfare, and one life with welfare slightly above A, other things being equal.
  3. The Weak Non-Sadism Condition: There is a negative welfare level and a number of lives at this level such that an addition of any number of people with positive welfare is at least as good as an addition of the lives with negative welfare, other things being equal.

Proof

Lemma

First we prove a lemma: what Arrhenius calls "Condition $\beta$" and what mathematicians would refer to as a proof that our group is Archimedean. This means that for any $X,Y>0$ there is an $n$ such that $nX\geq Y$.

Basically we just observe that the "non-elitism" condition makes a simple induction. Starting from the premise that $(n+1)(X-1)+D\geq X+nY+D$, setting $Y=D=0$ gives us $(n+1)(X-1)\geq X$, i.e. $X$ is Archimedean with respect to $X-1$. Continuing the induction we find that $X$ is Archimedean with respect to $X-k$, completing the proof.6,7
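
Written out, the first two steps of that induction are (this chain is my expansion of the argument; scaling an inequality by a positive integer is legitimate because adding the same population to both sides preserves the order):
$$(n_1+1)(X-1)\geq X \qquad\text{and}\qquad (n_2+1)(X-2)\geq X-1,$$
so multiplying the second inequality by $(n_1+1)$ and chaining,
$$(n_1+1)(n_2+1)(X-2)\;\geq\;(n_1+1)(X-1)\;\geq\;X.$$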

Theorem

First, let me give a formal definition of the "Very Repugnant Conclusion": For any high level of welfare $H$, low positive level of welfare $L$ and negative level of welfare $-N$ and population sizes $c_{H},c_{N}$ there is some $c_{L}$ such that $c_{L}\cdot L+c_{N}\cdot(-N)\geq c_{H}H$.

To prove our claim: we know there is some $k_{1}$ such that
$$k_{1}\cdot L\geq c_{H}\cdot H\tag{1}$$
because of our lemma. Because it's a group, we know that $(N+(-N))+L=L$ and moreover $(c_{N}N+c_{N}\cdot(-N))+L=L$. Substituting this into (1) yields
$$k_{1}\left[\left(c_{N}N+c_{N}\cdot(-N)\right)+L\right]\geq c_{H}H\tag{2}$$
Expanding the left hand side of (2) we get
$$k_{1}c_{N}N+k_{1}c_{N}\cdot(-N)+k_{1}L\tag{3}$$
By our lemma there is some $k_{2}$ such that $k_{2}L+D\geq k_{1}c_{N}N+D$; letting $D=k_{1}c_{N}(-N)+k_{1}L$ and using transitivity we get that
$$k_{2}L+k_{1}c_{N}(-N)+k_{1}L\geq c_{H}H$$
Rewriting terms leaves us with
$$\left(k_{1}+k_{2}\right)L+k_{1}c_{N}(-N)\geq c_{H}H$$
or
$$c_L L+c_{N'}(-N)\geq c_{H}H$$
$\blacksquare$

Comments


I don't know that this shorter proof is much more convincing than Arrhenius' - my guess is that the people who disagree with an assumption are those who take a "person-affecting" view or otherwise object to the entire premise of the theorem. I would though say that:
  1. None of the math I've used is beyond the average high-school student. It's just making the "algebra can be about things other than numbers" leap which is hard.
  2. While abstract algebraic notation can be intimidating, it's relevant to realize that using it makes you more concise. (To the extent that a 26-page paper can be rewritten into a two-page blog post.)
  3. Because we can be more concise and use standard terminology, it shines a light on what is really the controversial assumption: Non-Elitism.
  4. Similarly, because we use standard concepts it's easier to see missing assumptions (e.g. I didn't realize that Arrhenius was missing a closure axiom until I tried to cast it in group theory terms).
Lastly, because I can't finish any post without mentioning lattice theory, I'll add that some of the errors in Arrhenius' paper occurred because lattices are such a natural structure that he assumed they exist even where they weren't shown to. Of course, if you involve lattices more you end up with total utilitarianism, giving more insight into why Arrhenius' result holds.

Acknowledgements


I would like to thank Prof. Arrhenius for the idea, and Nick Beckstead for talking about it with me.

Footnotes

  1. Formally, for each $X$ there is some $-X$ such that for all $Y$, $X+(-X)+Y=Y$.
  2. This isn't an explicit assumption in Arrhenius, but it's implicitly assumed just about everywhere
  3. This arguably is controversial so I'll point out that commutativity isn't really required, but since it keeps the proof a lot shorter and most people will accept it, I'll keep the assumption
  4. Arrhenius "proves" that welfare is order-isomorphic to $\mathbb Z$ incorrectly, so I'll just assume it instead of attempting to derive it from others. If you prefer, you can take his "Discreteness" axiom, add in assumptions that welfare is totally ordered and has no least or greatest element and you'll get the same thing.
  5. Which is just to say that since it's an abelian group it's also a $\mathbb Z$-module.
  6. Nick Beckstead thought that some people might not like using the neutral level like this, so I'll point out that you can use an alternative proof at the expense of an additional axiom. If you assume non-sadism, then you can find that $X+nY\geq X$ and therefore transitively $(n+1)(X-1)\geq X$.
  7. This is somewhat misleading: we've only shown that the group is archimedean for totally equitable populations. That's all we need though.

How Conscious is my Relationship?

One of the most interesting theories of consciousness is Integrated Information Theory (IIT), proposed by Giulio Tononi. One of its more radical claims is that consciousness is a spectrum, and that virtually everything in the universe from the smallest atom to the largest galaxy has at least some amount of consciousness.

Whatever criticisms one can make of IIT, the fact that it allows you to sit down and calculate how conscious a system is represents a fundamental advance in psychology. Since people say that good communication is the most important part of a relationship, and since IIT lets you calculate the consciousness of any information-bearing system, I thought it would be fun to calculate how conscious Gina's and my relationship is.

A Crash Course on Information

Entropy
The fundamental measure of information is surprise. The news could be filled with stories about how gravity remains constant, the sun rose from the east instead of the west and the moon continues to orbit the earth, but there is essentially zero surprise in these stories, and hence no information. If the moon were to escape earth's orbit we would all be shocked, and hence get a lot of information from this.

Written words have information too. If I forget to type the last letter of this phras, you can probably still guess it, meaning that trailing 'e' carries little surprise/information. Claude Shannon, founder of information theory, did precisely this experiment, covering up parts of words and seeing how well one could guess the remainder. (English has around 1 bit of information per letter, for the record.)

Whatever you're dealing with the important part to remember is that "surprise" is when a low-probability event occurs, and that "information" is proportional to "surprise". Systems which can be predicted very well in advance, such as whether the sun rises from the east or the west, have very low surprise on average. Those which cannot be predicted, such as the toss of a coin, have much more surprising outcomes. (Maximally surprising probability distributions are those where every event is equally likely.) The measure of how surprising a system is (and hence how much information the system has) was named Entropy by Shannon based on von Neumann's advice that "no one knows what entropy really is, so in a debate you will always have the advantage".

Divergence
Someone who knows modern English will have a bit more surprise than usual upon reading Shakespeare - words starting with "th" will end in "ou" more often than one would expect, but overall it's not too bad. One can struggle through Chaucer's Canterbury Tales with difficulty, and Caedmon (the oldest known English poem) is so unfamiliar that the letters are essentially unpredictable:
nu scylun hergan hefaenricaes uard
metudæs maecti end his modgidanc
uerc uuldurfadur swe he uundra gihwaes
eci dryctin or astelidæ
- first four lines of Caedmon. Yes, this is considered "English".
If we approximate the frequency of letters in Shakespeare based on our knowledge of modern English we won't get it too wrong (i.e. we won't frequently be surprised). But our approximation of Caedmon from modern English is horrific - we're surprised that 'u' is followed by 'u' in "uundra" and that 'd' is followed by 'æ' in "astelidæ".

Since you can make a good estimate of letter frequencies in Shakespeare based on modern English, that means Shakespearean English and modern English have a low divergence. The fact that we're so frequently surprised when reading Caedmon means that the probability distribution there is highly divergent from modern English.

Consciousness

Believe it or not, Entropy and Divergence are the tools we need to calculate a system's consciousness. Roughly, we want to approximate a system's behavior by assuming that its constituent parts behave independently. The worse that approximation is, the more "integrated" we say the system is. Knowing that, we can derive its Phi, the measure of its consciousness.

Our Relationship as a Conscious Being

Here is a completely unscientific measure of Gina's and my behavior over the last day or so:

The (i,j) entry is the fraction of time that I was doing activity i and Gina was doing activity j. (The marginal distributions are written, appropriately enough, in the margins.)

You can see that my entropy is 1.49 bits, while Gina (being the unpredictable radical she is) has 1.69 bits. This means that our lives are slightly less surprising than the result of two coin tosses (I can hear the tabloids knocking already).

However, our behavior is highly integrated: like many couples in which one person is loud and the other is a light sleeper, we're awake at the same time, and our shared hatred of driving means we only travel to see friends as a pair. Here's how it would look if we didn't coordinate our actions (i.e. assuming independence):

The divergence between these two distributions is our relationship's consciousness (Phi). Some not-terribly-interesting computations show that Phi = 1.49 bits.
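
To show the shape of the computation (the real activity table isn't reproduced here, so this 2x2 joint distribution is made up; this is also the post's simplified Phi, not the full IIT calculation):

```python
import math
from collections import defaultdict

# Hypothetical joint distribution P(my state, Gina's state)
joint = {("awake", "awake"): 0.45, ("asleep", "asleep"): 0.45,
         ("awake", "asleep"): 0.05, ("asleep", "awake"): 0.05}

mine, hers = defaultdict(float), defaultdict(float)
for (a, b), p in joint.items():  # marginal distributions
    mine[a] += p
    hers[b] += p

# Phi as used here: KL divergence between the joint distribution and the
# independent approximation built from the marginals
phi = sum(p * math.log2(p / (mine[a] * hers[b]))
          for (a, b), p in joint.items() if p > 0)
print(round(phi, 2))  # 0.53 bits for this toy table
```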

The Pauli exclusion principle tells us that electrons in the innermost shell have 1 bit of consciousness (i.e. Phi = 1), meaning that our relationship is about as sentient as the average helium atom. So if we do decide to break up, the murder of our relationship won't be much of a crime.

Side Notes

Obviously this is a little tongue-in-cheek, but one important thing you might wonder is why my decision to consider our relationship to have two components (me and Gina) is the correct one. Wouldn't it be better to assume that there are 200 billion elements (one for each neuron in our brains) or even $10^{28}$ (one for each atom in our bodies)?

The answer is that yes, that would be better (apart from the obvious computational difficulties). IIT says that consciousness occurs at the level of the system with the highest value of Phi, so if we performed the computation correctly, we would of course find that it's Gina and myself who are conscious, not our relationship, since we have higher values of Phi.

(The commitment-phobic will notice a downside to this principle: if your relationship becomes so complex and integrated that its value of Phi exceeds your own, you and your partner would lose individual consciousness and become one joint entity!)

I should also note that I've discussed IIT's description of the quantity of consciousness, but not its definition of quality of consciousness.

Conclusion

Our beliefs about consciousness are so contradictory it's impossible for any rigorous theory to support them all, and IIT does not disappoint on the "surprising conclusions" front. But some of its predictions have been confirmed by evidence (the areas of the brain with highest values of Phi are more linked to phenomenal consciousness, for example) and the fact that it can even make empirical predictions makes it an important step forward. I'll close with Tononi's description of how IIT changes our perspective on physics:
We are by now used to considering the universe as a vast empty space that contains enormous conglomerations of mass, charge, and energy—giant bright entities (where brightness reflects energy or mass) from planets to stars to galaxies. In this view (that is, in terms of mass, charge, or energy), each of us constitutes an extremely small, dim portion of what exists—indeed, hardly more than a speck of dust.

However, if consciousness (i.e., integrated information) exists as a fundamental property, an equally valid view of the universe is this: a vast empty space that contains mostly nothing, and occasionally just specks of integrated information (Φ)—mere dust, indeed—even there where the mass-charge–energy perspective reveals huge conglomerates. On the other hand, one small corner of the known universe contains a remarkable concentration of extremely bright entities (where brightness reflects high Φ), orders of magnitude brighter than anything around them. Each bright “Φ-star” is the main complex of an individual human being (and most likely, of individual animals). I argue that such Φ-centric view is at least as valid as that of a universe dominated by mass, charge, and energy. In fact, it may be more valid, since to be highly conscious (to have high Φ) implies that there is something it is like to be you, whereas if you just have high mass, charge, or energy, there may be little or nothing it is like to be you. From this standpoint, it would seem that entities with high Φ exist in a stronger sense than entities of high mass.

Acknowledgements

The idea for this post came from Brian's essay on Suffering Subroutines, and the basis for my description of IIT came from Tononi's Consciousness as Integrated Information: a Provisional Manifesto. Gina read an earlier draft of this post.

A Pure Math Argument for Total Utilitarianism

Addition is a very special operation. Despite the wide variety of esoteric mathematical objects known to us today, few of them share the basic desirable properties of grade-school arithmetic.

This fact was intuited by 19th century philosophers in the development of what we now call "total" utilitarianism. In this ethical system, we can assign each person a real number to indicate their welfare, and the value of an entire population is the sum of each individual's welfare.

Using modern mathematics, we can now prove the intuition of Mill and Bentham: because addition is so special, any ethical system which is in a certain technical sense "reasonable" is equivalent to total utilitarianism.

What do we mean by ethics?


The most basic premise is that we have some way of ordering individual lives.

We don't need to say how much better some life is than another, we just need to be able to put them in order. We might have some uncertainty as to which of two lives is better:


In this case, we aren't certain if "Medium" or "Medium 2" is better. However, we know they're both better than "Bad" and worse than "Good".

In the case when we always know which of two lives is better, we say that lives are totally ordered. If there is uncertainty, we say they are lattice ordered.

In either case, we require that the ranking remain consistent when we add people to the population. Here we add a person of "Medium" utility to each population:


The ranking on the right side of the figure above is legitimate because it keeps the order - if some life X is worse than Y, then (X + Medium) is still worse than (Y + Medium). This ranking below for example would fail that:


This ranking is inconsistent because it sometimes says that "Bad" is worse than "Medium" and other times says "Bad" is better than "Medium". A basic principle of ethics is that rankings should be consistent, and so rankings like the latter are excluded.

Increasing population size


The most obvious way of defining an ethics of populations is to just take an ordering of individual lives and "glue them together" in an order-preserving way, like I did above. This generates what mathematicians would call the free abelian group. (The only tricky part is that we need good and bad lives to "cancel out", something which I've talked about before.)

It turns out that merely gluing populations together in this way gives us a highly structured object known as a "lattice-ordered group". Here is a snippet of the resulting lattice:


This ranking is similar to what philosophers often call "Dominance" - if everyone in population P is better off than everyone in population Q, then P is better than Q. However, this is somewhat stronger - it allows us to compare populations of different sizes, something that the traditional dominance criterion doesn't let us do.

Let's take a minute to think about what we've done. Using only the fact that individuals' lives can be ordered and the requirement that population ethics respects this ordering in a certain technical sense, we've derived a robust population ethics, about which we can prove many interesting things.

Getting to total utilitarianism


One obvious facet of the above ranking is that it's not total. For example, we don't know if "Very Good" is better than "Good, Good", i.e. if it's better to have welfare "spread out" across multiple people, or concentrated in one. This obviously prohibits us from claiming that we've derived total utilitarianism, because under that system we always know which is better.

However, we can still derive a form of total utilitarianism which is equivalent in a large set of scenarios. To do so, we need to use the idea of an embedding. This is merely a way of assigning each welfare level a number. Here is an example embedding:

  • Medium = 1
  • Good = 2
  • Very Good = 3

Here's that same ordering, except I've tagged each population with the total "utility" resulting from that embedding:


This is clearly not identical to total utilitarianism - "Very Good" has a higher total utility than "Medium, Medium" but we don't know which is better, for example.

However, this ranking never disagrees with total utilitarianism - there is never a case where P is better than Q yet P has less total utility than Q.

Due to a surprising theorem of Hölder which I have discussed before, as long as we disallow "infinitely good" populations, there is always some embedding like this. Thus, we can say that:
Total utilitarianism is the moral "baseline". There might be circumstances where we are uncertain whether or not P is better than Q, but if we are certain, then it must be that P has greater total utility than Q.
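
Here's a sketch of that "never disagrees" claim, checked by brute force (the dominance-style comparison below is my simplified stand-in for the full lattice order, and integer welfare levels serve as their own embedding):

```python
import random
from itertools import permutations

def dominates(P, Q):
    """P >= Q if Q's people can be matched to distinct people in P who are
    each at least as well off, with every unmatched person in P having
    positive welfare."""
    if len(P) < len(Q):
        return False
    for matched in permutations(range(len(P)), len(Q)):
        if all(P[i] >= q for i, q in zip(matched, Q)):
            if all(w > 0 for i, w in enumerate(P) if i not in matched):
                return True
    return False

random.seed(0)
for _ in range(2_000):
    P = [random.randint(-3, 3) for _ in range(random.randint(1, 4))]
    Q = [random.randint(-3, 3) for _ in range(random.randint(1, 4))]
    if dominates(P, Q):
        assert sum(P) >= sum(Q)  # the order never disagrees with total utility
print("no disagreements found")
```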

An application


Here is one consequence of these results. Many people, including myself, have the intuition that inequality is bad. In fact, it is so bad that there are circumstances where increasing equality is good even if people are, on average, worse off.

If we accept the premises of this blog post, this intuition simply cannot be correct. If the inequitable society has greater total utility, it cannot be worse than the equitable one.

Concluding remarks


There are certain restrictions we want the "addition" of a person to a population to obey. It turns out that there is only one way to obey them: by using grade school addition, i.e. total utilitarianism.