When a disaster occurs, most of us have the intuition that we should help improve the lives of survivors, but very few of us feel an obligation to have more children to offset the population loss. Faced with the question "should we make more happy people, or more people happy?",^{1} our intuitions come down on the side of making more people happy. This is a surprisingly difficult position to defend, but it reminds me of Brian Tomasik's joke:

- Bob: "Ouch, my stomach hurts."
- Classical total utilitarian: "Don't worry! Wait while I create more happy people to make up for it."
- Average utilitarian: "Never fear! Let me create more people with only mild stomach aches to improve the average."
- Egalitarian: "I'm sorry to hear that. Here, let me give everyone else awful stomach aches too."
- ...
- Negative total utilitarian: "Here, take this medicine to make your stomach feel better."

### Limiting theorems

It turns out that population ethics has, to a certain extent, been "solved". This is a technical result, so uninterested readers can skip to the next section, but basically the various questions I discuss in this blog post are the **only questions remaining**. Specifically:

Let $\mathbf u = \left(u_1,u_2,\dots,u_n\right)$ be the utilities of people $1,2,\dots,n$, and similarly let $\mathbf u' = \left(u_1',u_2',\dots\right)$ be the utilities of a different population. Further, suppose we have a "reasonable" way of defining which of two populations is better.^{2} Then there is a "value function" $V$ such that population $\mathbf u$ is preferable to population $\mathbf u'$ if and only if $V(\mathbf u) > V(\mathbf u')$. Furthermore, $V$ has the form:

$$V(\mathbf u)=f(n)\sum_{i=1}^{n}\left[ g(u_i)-g(c)\right]$$

The three sections of this blog post concern:

- The concavity of $g$, which moderates our inequality aversion
- The value of $c$, which is known as the "critical level"
- And the form of $f$, which is the "number dampening"
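To make the pieces concrete, here is a minimal Python sketch of this value function. The default choices of $f$, $g$, and $c$ below are illustrative placeholders, not positions argued for yet:

```python
def V(u, f=lambda n: 1.0, g=lambda x: x, c=0.0):
    """General value function: V(u) = f(n) * sum_i [g(u_i) - g(c)].

    u -- list of individual utilities
    f -- number-dampening function of the population size n
    g -- transform whose concavity encodes inequality aversion
    c -- the critical level
    """
    n = len(u)
    return f(n) * sum(g(ui) - g(c) for ui in u)

# With f(n) = 1, g(x) = x, c = 0 this reduces to classical total utilitarianism:
print(V([50, 50]))   # 100.0
print(V([100, 0]))   # 100.0
```

The three knobs `f`, `g`, and `c` correspond exactly to the three sections that follow.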

### Inequality

In the wake of the financial crisis, movements like Occupy Wall Street raised wealth inequality as a major political issue.

An intuition that underlies these concerns is that the worse off people are, the more important it is to help them. We might donate to a charity that helps starving people eat, but not to one that helps rich yuppies eat even fancier food. The formal way to model this is to say that an individual's utility has diminishing returns from society's point of view: additional utility to that person benefits society less and less as they become better off.


One way of visualizing the impact this has on our decisions about populations is an indifference curve. In the chart below, the x-axis represents the utility of person X and the y-axis the utility of person Y. Each line on the chart indicates a set of points between which we are *indifferent*. For example, the blue line includes both $(50,50)$ and $(100,0)$: if we don't believe that utility has diminishing returns, we don't care how utility is divided up between the populace, since $50 + 50 = 100 + 0$.
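The indifference described above, and how a concave $g$ breaks it, can be checked numerically (using $g(x)=\sqrt{x}$ purely as an illustrative concave function):

```python
import math

def social_value(u, g):
    """Sum of transformed utilities: sum_i g(u_i)."""
    return sum(g(ui) for ui in u)

linear = lambda x: x   # no diminishing returns
concave = math.sqrt    # diminishing returns

# With linear g, we are indifferent between an equal and an unequal split:
print(social_value([50, 50], linear) == social_value([100, 0], linear))  # True

# With concave g, the equal split is strictly better:
print(social_value([50, 50], concave) > social_value([100, 0], concave))  # True
```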

Inequality aversion is a *conclusion* of population ethics, not an *assumption*.^{3}

### Interlude - The Representation of Populations

We've just shown a very non-trivial result: if $g$ is concave (meaning that increasing utility has diminishing returns), then we are inequality-averse. (Conversely, if $g$ were convex then we would be inequality-seeking, but I don't know of anyone who has argued for this.)

One problem we're going to run into soon is that there are too many variables to easily visualize. So I want to bring up a certain fact about population ethics: for any population $\mathbf u$, there is a population $\mathbf u'$ such that:

- The number of people in $\mathbf u$ and $\mathbf u'$ is the same
- Everyone in $\mathbf u'$ has the same utility as each other (i.e. $\mathbf u'$ is "perfectly equitable")
- And we are indifferent between $\mathbf u$ and $\mathbf u'$

For example, if we believed utility did not have diminishing returns, we would be indifferent between $(75,25)$ and $(50,50)$ because the total utility is the same.

Any time we want to compare populations $p$ and $q$, we can instead compare $p'$ and $q'$, where both $p'$ and $q'$ are perfectly equitable (i.e. every person in $p'$ has the same utility as each other, and similarly for $q'$). A perfectly equitable population can be parameterized by exactly two variables: the number of people in the population and the average utility. While there are theoretical implications of this, the most relevant fact for us is that it means we can keep using two-dimensional graphs.
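Here's a sketch of how to compute this perfectly equitable equivalent for a given $g$: find the common level $\bar u$ with $n\,g(\bar u) = \sum_i g(u_i)$, i.e. $\bar u = g^{-1}\left(\tfrac{1}{n}\sum_i g(u_i)\right)$. The choice of $g(x)=\sqrt{x}$ below is again just an illustrative concave function (whose inverse is squaring):

```python
import math

def equitable_equivalent(u, g=math.sqrt, g_inv=lambda y: y * y):
    """Common utility level u_bar such that a population of len(u)
    people all at u_bar has the same value as u:
        n * g(u_bar) = sum_i g(u_i)  =>  u_bar = g_inv(mean of g(u_i))
    """
    n = len(u)
    return g_inv(sum(g(ui) for ui in u) / n)

# With linear g, the equivalent is just the average:
print(equitable_equivalent([75, 25], g=lambda x: x, g_inv=lambda y: y))  # 50.0

# With concave g, it is below the average, reflecting inequality aversion:
print(equitable_equivalent([75, 25]))  # ~46.65
```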

### Critical Levels

Back to the topic at hand. The following assumption sounds very strange, but it's made quite frequently in the literature: even if your life is worth living *to you* and you don't influence anyone else, that doesn't mean the *population as a whole* benefits from your existence. Specifically, your welfare must be greater than a certain amount, known as the "critical level", before your existence benefits society.^{4}

More formally:

$$V(\mathbf u)=\sum_{i=1}^{n} \left(u_i - c\right)$$

where $c$ is the critical level. (Note that $c$ is a constant, independent of $\mathbf u$.) I think this is best illustrated with an example. Suppose we have a constant amount of utility, and we're wondering how many people to divide it up between. (As mentioned earlier, this is a perfectly equitable population, so everyone gets an equal share.) Here's how changing the critical level changes our opinion of the optimal population size:

Positive critical levels give a "penalty" for every person who's alive, whereas negative critical levels give a "bonus". This is clear since

$$V(\mathbf u)=\sum_{i=1}^{n} \left(u_i - c\right)=\left(\sum_{i=1}^{n} u_i\right)-nc$$

Here are indifference curves for different critical levels:
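A small sketch of this identity in code (the population and the values of $c$ are made up for illustration):

```python
def critical_level_value(u, c):
    """V(u) = sum_i (u_i - c) = total utility minus n*c."""
    return sum(u) - len(u) * c

population = [10, 10, 10, 10]  # total utility 40

print(critical_level_value(population, c=0))    # 40: plain total utilitarianism
print(critical_level_value(population, c=5))    # 20: each person carries a penalty of 5
print(critical_level_value(population, c=-5))   # 60: each person adds a bonus of 5
```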

With a critical level of zero, however, we run into Parfit's "Repugnant Conclusion":

For any possible population of at least ten billion people, all with a very high quality of life, there must be some much larger imaginable population whose existence, if other things are equal, would be better even though its members have lives that are barely worth living.^{5}

In tabular form:

| Population | Size | Average Utility | Total Value (c = 0) | Total Value (c = 10) |
|---|---|---|---|---|
| A | 1,000 | 100 | 100,000 | 90,000 |
| B | 10,000,000 | 0.1 | 1,000,000 | -99,000,000 |
| C | 1,000 | -4 | -4,000 | -14,000 |
| D | 100 | -1 | -100 | -1,100 |
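We can check the table's arithmetic directly (assuming $c = 10$ for the last column, which is what the rows work out to):

```python
def value(size, avg_utility, c):
    """Total value of a perfectly equitable population: size * (avg - c)."""
    return size * (avg_utility - c)

populations = {
    "A": (1_000, 100),
    "B": (10_000_000, 0.1),
    "C": (1_000, -4),
    "D": (100, -1),
}

for name, (size, avg) in populations.items():
    print(name, value(size, avg, c=0), value(size, avg, c=10))
```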

Many people have the intuition that A is preferable to B. We can see that only by having a positive critical level can we make this intuition hold.

Unfortunately, we can also see that having a positive value of *c* results in what Arrhenius has called the "sadistic conclusion": We prefer population C to population B, even though everyone in C is suffering and the people in B have positive lives. And if *c* is negative we have another sort of sadistic conclusion: We prefer C to D even though there are fewer people suffering in D and no one is better off in C than they are in D.

Some people will bite the bullet and prefer the Sadistic Conclusion to the Repugnant one, but it's hard to make the case that it's the less counterintuitive of the two. So we are left with a critical level of zero.

### Number Dampening

Canadian philosopher Thomas Hurka has argued for the following two points:

- For small populations, we should care about *total* welfare
- For large populations, we should care about *average* welfare

Independent of the question about whether people *should* care more about average welfare for large populations, it seems clear that in practice we do (as I've discussed before).

The way to formalize this is to introduce a function $f$:

$$V(\mathbf u)=f(n)\sum_{i=1}^{n}u_i$$

where

$$f(n) = \left\{ \begin{array}{lr} 1 & : n \leq n_0 \\ n_0/n & : n > n_0 \end{array} \right.$$

If we have no more than $n_0$ people (i.e. if the population is "small") then this is equivalent to total utilitarianism. If we have more (i.e. the population is "large") then it's equivalent to average utilitarianism.

Even with this modification, though, it still seems pretty implausible that population size has diminishing returns. The relevant fact is that $\sqrt{x+y}\not=\sqrt{x}+\sqrt{y}$, so we can't just break populations apart.^{6} Therefore, we have to consider every single person who has ever lived (and who ever will live) before we can make ethical decisions. As an example of the odd behavior this "holistic" reasoning implies:

Some researchers are on the verge of discovering a cure for cancer. Just before completing their research, they learn that the population of humans 50,000 years ago was smaller than they thought. As a result, they drop their research to focus instead on having more children.

An example will explain why this is the correct behavior if you believe in number-dampening. Say we're using the value function

$$V(\mathbf u)=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}u_i$$

and we can either move everyone alive from 10 utils up to 10.1 (discover the cancer cure) or else add a new person with utility 100 (have a child). Which option is better depends on the population size:

| Population size | Value of society w/ cancer cure | Value of society w/ new child |
|---|---|---|
| 500 | $\frac{1}{\sqrt{500}}\left(500\cdot 10.1\right)=226$ | $\frac{1}{\sqrt{501}}\left(500\cdot 10 + 100\right)=228$ |
| 5,000 | $\frac{1}{\sqrt{5000}}\left(5000\cdot 10.1\right)=714$ | $\frac{1}{\sqrt{5001}}\left(5000\cdot 10 + 100\right)=708$ |

*Having a child is better if the population size is 500, but worse if the population size is 5,000.*
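The comparison can be reproduced with a short script (a sketch of the number-dampened value function with $f(n)=1/\sqrt{n}$):

```python
import math

def dampened_value(u):
    """V(u) = (1/sqrt(n)) * sum(u)."""
    return sum(u) / math.sqrt(len(u))

for n in (500, 5000):
    cure = dampened_value([10.1] * n)         # everyone moves from 10 to 10.1
    child = dampened_value([10] * n + [100])  # add one new person at utility 100
    print(n, round(cure), round(child))
```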

It goes against our intuition that the population size in the distant past should affect our decisions about what to do today. One simple way around this is to just declare that "population size" is the number of people *currently alive*, not the people who have ever lived. Nick Beckstead's thesis has an interesting response:

The Separated Worlds: There are only two planets with life. These planets are outside of each other’s light cones. On each planet, people live good lives. Relative to each of these planets’ reference frames, the planets exist at the same time. But relative to the reference frame of some comet traveling at a great speed (relative to the reference frame of the planets), one planet is created and destroyed before the other is created.

To make this exact, let's say each planet has 1,000 people each with utility level 100. Then we have:

| Dampening | Value in the planets' frame | Value in the comet's frame |
|---|---|---|
| $f(n)=1/\sqrt{n}$ | $\frac{1}{\sqrt{2000}}\left(2000\cdot 100\right)\approx 4{,}472$ | $2\cdot\frac{1}{\sqrt{1000}}\left(1000\cdot 100\right)\approx 6{,}325$ |
| None | 200,000 | 200,000 |
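A quick check of this frame dependence (a sketch; it assumes the comet's frame treats the planets as two disjoint populations whose values simply add, while the planets' frame sees one combined population):

```python
import math

def dampened_value(u):
    """V(u) = (1/sqrt(n)) * sum(u)."""
    return sum(u) / math.sqrt(len(u))

planet = [100] * 1000

# Planets' frame: one combined population of 2,000 people.
combined = dampened_value(planet + planet)

# Comet's frame: two disjoint populations, valued separately and summed.
sequential = dampened_value(planet) + dampened_value(planet)

print(round(combined), round(sequential))  # 4472 6325
```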

How valuable a population is shouldn't change if you split it into arbitrary sub-populations, so it's hard to make the case for number dampening.

### Conclusion

I started off by claiming (without proof) that for any "reasonable" way of determining which population is better, we could equivalently use a value function $V$ such that population $\mathbf u$ is better than population $\mathbf u'$ if and only if $V(\mathbf u) > V(\mathbf u')$. Furthermore, I claimed $V$ must have the form: $$V(\mathbf u)=f(n)\sum_{i=1}^n\left[g(u_i)-g(c)\right]$$ In this post, we investigated modifying $f$, $g$, and $c$. However, we saw that having $c$ be anything but zero leads to a "sadistic conclusion", and having $f$ be non-constant leads to the "Separated Worlds" problem. We conclude that $V$ must be of the form $$V(\mathbf u) = \sum_{i=1}^n g(u_i)$$ where $g$ is a continuous, monotonically increasing function. This is basically classical (or total) utilitarianism, with perhaps some inequality aversion.

It's common to view ethicists as people who just talk all day without making any progress on the issues, and to some extent this reputation is deserved. But in the area of population ethics, I hope I've convinced you that philosophers have made tremendous progress, to the point that one major question (the form of the value function) has been almost completely solved.

### Footnotes

- I'm sure I didn't come up with this phrase, but I can't find who originally said it. I'd be much obliged to any commenters who can let me know.
- The obvious objection I'm ignoring here is the "person-affecting view", or "the slogan". I'm pretty skeptical of it, but it's worth pointing out that not all philosophers agree that population ethics must take this form.
- Of course, if we came to the conclusion that inequality is good, we might start questioning our assumptions, so this is perhaps not completely true.
- If the critical level is negative, then the converse holds (your life can suck but you'll still be a benefit to society). This is rarely argued.
- From Parfit's original *Reasons and Persons*.
- This isn't just a problem with the square root - if $f(x+y)=f(x)+f(y)$ with $x,y\in\mathbb R$ then $f(x)=cx$ if $f$ is non-"pathological". (This is known as Cauchy's functional equation.)

(A commenter notes that the phrase is originally from Jan Narveson; see p. 80 of https://www.jstor.org/stable/pdf/27902295.pdf.)
