<p><i>P4P, by Xodarap</i></p><h2>Using Math to deal with Moral Uncertainty</h2><p><i>2014-09-21</i></p><script src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript">MathJax.Hub.Config({ tex2jax: {inlineMath: [['$','$'], ['\\(','\\)']] }, TeX: { equationNumbers: {autoNumber: "all"} } }); </script>There is a standard result that if a "rational" agent is uncertain about what the outcome of events will be (i.e. they have to choose between two "lotteries"), then they should maximize the expectation of some utility function. Formally, if we define a lottery as $L=\sum_i p_i O_i$, where $\{O_i\}$ are the outcomes and $\{p_i\}$ their associated probabilities, then for any "rational" preference ordering $\preceq$ there is a utility function $u$ such that<br />$$E\left[u(L)\right]\leq E\left[u(L')\right] \leftrightarrow L \preceq L'$$<br />Traditionally, this is used when people aren't certain what the outcomes of their actions will be. However, I recently attended an interesting presentation by Brian Hedden where he discussed applying this to cases of normative uncertainty, i.e. cases where we know what the outcome of our actions will be, but we just don't know what the correct thing to value is.<br /><br />An analog of equation (1) in this case is to introduce ethical theories $T_1,\dots,T_n$ to which we might subscribe, with $u_i(o)$ the value of an outcome $o$ under theory $T_i$, and then ask whether there is a function $M(o) = \sum_i p(T_i)u_i(o)$ such that:<br />$$M(o)\leq M(o') \leftrightarrow o \preceq o'$$<br />Brian referred to this "meta-" theory as Maximize InterTheoretic Expectation, or MITE. 
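If every theory's values happen to be ordinary real numbers on a common scale, the MITE rule is just a probability-weighted sum. Here is a minimal sketch; the theory names, credences, and utility numbers are invented purely for illustration (nothing here comes from Hedden's paper):

```python
# MITE under the (strong) assumption that utilities are real-valued and
# intertheoretically comparable: M(o) = sum_i p(T_i) * u_i(o).
def mite_value(credences, utilities, outcome):
    """Expected moral value of `outcome`, weighting each theory by its credence."""
    return sum(p * utilities[theory](outcome) for theory, p in credences.items())

# Two made-up theories: one cares only about total welfare, the other
# attaches a large penalty to breaking a promise.
credences = {"utilitarianism": 0.7, "deontology": 0.3}
utilities = {
    "utilitarianism": lambda o: o["welfare"],
    "deontology": lambda o: -1000 if o["breaks_promise"] else 0,
}

o1 = {"welfare": 100, "breaks_promise": False}
o2 = {"welfare": 300, "breaks_promise": True}

print(mite_value(credences, utilities, o1))  # 70.0
print(mite_value(credences, utilities, o2))  # -90.0
```

The rest of the post is about what happens when this comparability assumption fails.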
He <a href="http://users.ox.ac.uk/~sfop0432/MITE.pdf">believes that</a><br /><blockquote class="tr_bq">There are moral theories which it can be rational to take seriously, such that if you do take them seriously, MITE cannot say anything about what you super-subjectively ought to do, given your normative uncertainty.</blockquote>I show here that:<br /><ol><li>Contrary to Brian's argument, a MITE function always exists.</li><li>Furthermore, the output of this function is always just a vector of real numbers.</li></ol><h3>Groups</h3>The basis of this post is the fact that we can generalize equation (2) above to an arbitrary ordered group $G=(\Omega,+,\leq)$. Rather than bore the reader with a recitation of the group axioms, I will simply point the reader to <a href="http://en.wikipedia.org/wiki/Group_(mathematics)">Wikipedia</a> and note that the one possibly questionable assumption here is the existence of inverses (i.e. the claim that for any lottery $L$ there is a lottery $L'$ such that the agent is indifferent between participating in both lotteries and in neither).<sup>1</sup><br /><br />There are probably prettier ways of doing this, but here's a simple way of defining a group which is guaranteed to work. Let's say that:<br /><br /><ul><li>Each theory $T_i$ has some set of possible values $V_i$, and we can find the (intratheoretic) value of an outcome via $u_i:\mathcal{O}\to V_i$. Crucially, we are not claiming that these values are in any way comparable to each other. ($u_i$ is guaranteed to exist because it could just be the identity function.)</li><li>$\Omega_i = \mathbb R \times V_i$ is the set of pairs joining the probability of an outcome with its value. </li><li>$\Omega =\prod_i \Omega_i$, and $\pi_i:\Omega_i\hookrightarrow \Omega$ is the canonical embedding (i.e. 
$\pi_i(\omega)$ is zero everywhere except that it puts $\omega$ into the $i$th position).</li><li>$G=(\Omega, +)$, with addition defined elementwise. </li></ul><br /><br /><b>Theorem 1</b>: For any partial order $\preceq\subseteq \Omega\times \Omega$, $G$ satisfies (2).<br /><br /><b>Proof:</b> It's clear that<br />$$M(o)=\sum_i \pi_i \left(p(T_i), u_i(o)\right)$$<br />will just embed the information into $G$, which can easily inherit the order. Of course, if we are really dedicated to the notation in (2), we can define $x\cdot y = \pi_i(x,y)$ and then get<br />$$M(o)=\sum_i p(T_i) \cdot u_i(o)$$<br />$\square$<br /><h3>So what?</h3>So far we've managed to show that you can redefine addition to mean whatever you want, and therefore utility functions will basically always exist. But it will turn out that we are actually dealing with some pretty standard groups here.<br /><br />First, a little commentary on terms. One of the major objections Brian raises is the notion of "options", i.e. the fact that certain moral theories distinguish "optional" things from "required" things. For example, we might say that donating to charity is optional, while not murdering people is required. Furthermore, these types of goods bear a non-Archimedean relationship to each other – that is, no amount of donating to charity can offset a murder.<br /><br />For any ordered group $G$ there is a chain of subgroups $C_1\subset C_2\subset\dots\subset G$ such that each $C_i$ is "convex". Convex subgroups represent this notion of "optionality": $C_1$ represents all the "optional" things, $C_2$ is everything that is either required or optional, etc. Note that I am not assuming anything new here; it is a standard result that the set of all convex subgroups forms a chain in any ordered group (see Glass, Lemma 3.2.1).<br /><br /><b>Theorem 2:</b> Our above group can be order-embedded into a subset of $\mathbb R ^n$ ordered lexically, i.e. 
we are just dealing with a set of vectors where each component of the vector is a real number. Furthermore, the number of components in the vector is equal to the number of "degrees" of optionality.<br /><b>Proof: </b>This is the <a href="http://en.wikipedia.org/wiki/Hahn_embedding_theorem">Hahn embedding theorem</a>. $\square$<br /><br /><b>Corollary:</b> if (and only if!) none of the theories we give credence to has "optionality", then we are just dealing with the real numbers.<br /><br /><h3>Example</h3>The above was really abstract, so it's reasonable to ask for an example. But before I give one, I would like to share a standard math joke:<br /><blockquote class="tr_bq"><i>(Prof. finishes proving Liouville's theorem that any bounded entire function is constant.)</i><br />Student: I'm not sure I really understand. Could you give an example?<br />Prof.: Sure. 7.<br /><i>(Prof. goes back to writing on the blackboard.)</i></blockquote>The joke here is that $f(x)=7$ is "obviously" a constant function, whereas the student somehow wanted a more exotic example. But the professor had just proven that no such examples exist!<br /><br />So I will give some examples which the astute reader will point out are "obviously" instances of lexically ordered vectors of real numbers. This is because I have just proven that there are no other examples. Hopefully it will still be useful.<br /><br />First, let's see how satisficing consequentialism by itself already amounts to a lexically ordered vector. Consider the decision criterion that $(x_1,x_2)\leq (y_1,y_2)$ if and only if $x_2< y_2$ or both $(x_2 = y_2)$ and $(x_1\leq y_1)$ (i.e. it is lexically ordered from the right). So we could, for example, represent giving a thousand dollars to charity as $(1000,0)$ and murdering someone as $(0,-10000)$; this gives us our desired result that no amount of donations can offset a murder (i.e. $(x,-10000)\prec(0,0)$ for all $x$). 
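This right-lexical comparison is easy to make concrete; here is a quick sketch using the dollar figures from the example (the helper `leq` is a hypothetical name, not anything from the literature):

```python
# Right-lexical order on (optional, required) value pairs:
# (x1, x2) <= (y1, y2)  iff  x2 < y2, or x2 == y2 and x1 <= y1.
def leq(a, b):
    (x1, x2), (y1, y2) = a, b
    return x2 < y2 or (x2 == y2 and x1 <= y1)

do_nothing = (0, 0)
donate_1000 = (1000, 0)   # charity adds only "optional" value
murder = (0, -10000)      # murder violates a "required" norm

assert leq(do_nothing, donate_1000)      # donating beats doing nothing
assert not leq(donate_1000, do_nothing)
# No donation, however large, offsets a murder:
for x in (10**4, 10**6, 10**12):
    assert leq((x, -10000), do_nothing)
    assert not leq(do_nothing, (x, -10000))
```

The final loop is exactly the non-Archimedean property: the "required" coordinate dominates no matter how large the "optional" coordinate grows.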
And of course this is a lexically ordered vector of real numbers, in accordance with our theorem.<br /><br />Now let's contrast this with standard utilitarianism, which would say that murdering someone could be offset by donating enough money to charity to prevent someone else from dying. Let's call that amount $\$$10,000 (i.e. murdering someone has -10,000 utils). There are no "optional" things in standard utilitarianism, so we can write outcomes as $(0,u)$ where $u$ is the utility of the outcome. In this case we have that $(0,x-10000)\succeq (0,0)$ if $x\geq 10000$, i.e. donations of at least $\$$10,000 offset a murder.<br /><br />Now let's ask about the inter-theoretic uncertainty case. We have to choose between doing nothing and murdering someone while donating $\$$15,000 to charity. We believe in satisficing consequentialism with probability $p$ and in standard utilitarianism with probability $1-p$. Therefore we have<br />$$\begin{align*}<br />p(15000,-10000) + (1-p)(0, 5000) & = (15000p,-10000p + 5000(1-p)) \\<br />& = (15000p,5000-15000p)<br />\end{align*}<br />$$ This is strictly preferred to the $(0,0)$ option if $p< 1/3$; if $p=1/3$ exactly, the "required" components tie and the "optional" component still (just barely) favors it. <br /><br />This isn't the only way we can make inter-theoretic comparisons. I actually don't even think it's the best way. But it is one example where we're using a lexically ordered vector of real numbers, and all other examples will be similar.<br /><br /><h3>A Counterexample</h3>It may be useful to construct a decision criterion which can't be represented using a MITE formula. (Obviously, it will have to disobey one of the ordered-group axioms, due to Theorem 1.)<br /><br />Here's one example:<br /><blockquote class="tr_bq">Let's say we represent an outcome having deontological value $d$ and utility $u$ as $(d,u)$, and we believe deontology with probability $p$. 
Then $(d_1,u_1)\preceq (d_2,u_2)$ if and only if $p(u_1\bmod d_1)\leq p(u_2\bmod d_2)$.</blockquote>This is not order-preserving, because sometimes increasing utility is good but other times increasing utility is bad. So it doesn't form an ordered group.<br /><br /><h3>Commentary</h3>Brian took as his definition of "rational" the standard von Neumann–Morgenstern axioms. This is of course a perfectly reasonable thing to do in general, but as he points out, many individual moral theories fail these axioms. (Insert joke here about utilitarianism being the only "rational" moral system.)<br /><br />I personally find the idea of optionality pretty stupid and think it causes all sorts of problems even without needing to compare it to other theories. But if you do want to give it some credence, then a MITE formula will work fine for you.<br /><h3>Footnotes</h3><ol><li>Note that this also requires "modding out" by an indifference relation.</li></ol><h2>Ridiculous math things which Ethics shouldn't depend on but does</h2><p><i>2014-09-13</i></p>There is a scene in Gulliver's Travels where the protagonist calls up the ghosts of all the philosophers since Aristotle, and the ghosts all admit that Aristotle was way better than them at everything. Especially Descartes – Jonathan Swift wants to make very clear that Aristotle is a way better philosopher than Descartes, and that all of Descartes's ideas are stupid. (I think this was supposed to prove a point in some long-forgotten religious dispute.)<br /><br />If I ever become a prominent philosopher and we develop the technology to call up ghosts in order to win points in literary holy wars (I will let the reader decide which of those two conditions is more likely), please reincarnate me to talk ethics with Aristotle. 
Basically all the problems I'm worried about deal with mathematical concepts which weren't developed until around a century ago, and I'm excited to hear whether a <a href="http://en.wikipedia.org/wiki/Aristotelian_ethics">virtuous person</a> would accept Zorn's Lemma.<br /><br />Today I want to share two mathematical assumptions which are so esoteric that even most mathematicians don't bother worrying about them. Despite that, they critically influence what we think about ethics.<br /><h3>The Axiom of Choice</h3>The Axiom of Choice is everyone's favorite example of something which seems like an innocuous assumption but isn't. (The Axiom of Choice is the axiom of choice for such situations, if you will.) Here's Wikipedia's informal description:<br /><blockquote class="tr_bq">The axiom of choice says that given any collection of bins, each containing at least one object, it is possible to make a selection of exactly one object from each bin.</blockquote>Seems pretty reasonable, right? Unfortunately, it leads to a series of paradoxes, such as the fact that any ball <a href="http://en.wikipedia.org/wiki/Banach%E2%80%93Tarski_paradox">can be doubled</a> into two balls, both of which are the same size as the first.<br /><br />In many cases, a weaker assumption known as the "axiom of dependent choice" suffices, and it has the advantage of not leading to any (known) paradoxes. 
Sadly, this doesn't work for ethics.<br /><br />Consider the following two reasonable assumptions:<br /><br /><ol><li>Weak Pareto: if we can make someone better off and no one worse off, we should.</li><li>Intergenerational Equality: we should value the welfare of every generation equally.</li></ol><br /><b>Theorem </b>(proven by Zame): we cannot prove the existence of an ethical system which satisfies both Weak Pareto and Intergenerational Equality without using the axiom of choice (i.e. the axiom of dependent choice doesn't suffice).<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-5vVL0_OXEEI/VBR-0h_VTNI/AAAAAAAAATQ/AIkb_AWuouw/s1600/kitty_tossup.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://3.bp.blogspot.com/-5vVL0_OXEEI/VBR-0h_VTNI/AAAAAAAAATQ/AIkb_AWuouw/s1600/kitty_tossup.jpg" height="320" width="263" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Sorry grandma, but unless you can make that ball double in size we're gonna have to start means-testing Medicare</td></tr></tbody></table><br /><h3>Hyperreal numbers</h3>The observant reader will note that the previous theorem showed only that we can prove the <i>existence </i>of a "good" ethical system if we use the axiom of choice; it said nothing about our actually being able to find it. 
To get that we have to enter the exciting world of hyperreal numbers!<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-ju78L40cWZI/VBRphS6obII/AAAAAAAAAS0/Fxfh2BMrWR4/s1600/Founding-Fathers-I-5135.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://4.bp.blogspot.com/-ju78L40cWZI/VBRphS6obII/AAAAAAAAAS0/Fxfh2BMrWR4/s1600/Founding-Fathers-I-5135.jpg" height="262" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">The founding fathers weren't as impressed with Thomas Jefferson's original nonconstructive proof that the Bill of Rights could, in theory, be created</td></tr></tbody></table><br />I recently asked my girlfriend whether she would prefer:<br /><ol><li>Having one unit of happiness every day, for the rest of eternity, or</li><li>Having two units of happiness every day, for the rest of eternity</li></ol>She told me that the answer was obvious: she's a total utilitarian and in the first circumstance she would have one unit of happiness for an infinite amount of time, i.e. one infinity's worth of happiness. But in the second case she would have two units for an infinite amount of time, i.e. two infinities of happiness. 
And clearly two infinities are bigger than one.<br /><br />My guess is that how reasonable you think this statement is will depend in a U-shaped way on how much math you've learned:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-FeQlGJPhvi8/VBRr54jj82I/AAAAAAAAATA/uTzs_6MYDrw/s1600/Reasonableness.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-FeQlGJPhvi8/VBRr54jj82I/AAAAAAAAATA/uTzs_6MYDrw/s1600/Reasonableness.png" height="368" width="576" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><br />To the average Joe, it's incredibly obvious that two infinities are bigger than one. More advanced readers will note that the above utility series don't converge, so it's not even meaningful to talk about one series being bigger than another. But those who've dealt with the bizarre world of <a href="http://en.wikipedia.org/wiki/Non-standard_analysis">nonstandard analysis</a> know that notions like "convergence" and "limit" are conspiracies propagated by high school calculus teachers to hide the truth about infinitesimals. In fact, there is a perfectly well-defined sense in which two infinities are bigger than one, and the number system which this gives rise to is known as the "hyperreal numbers."<br /><br />From an ethical standpoint, here are the relevant things you need to know:<br /><br /><b>Theorem</b> (proven by Basu and Mitra): if we use only our normal "real" numbers, then we can't construct an ethical system which obeys the above Weak Pareto and Intergenerational Equality assumptions.<br /><b>Theorem</b> (proven by Pivato): we <i>can </i>find such a system if we use the hyperreal numbers.<br /><br />To any TV producers reading this: the success of the hyperreal approach over the "standard calculus" approach would make me an excellent soft-news-show guest. 
While most stations can drum up some old crotchety guy complaining about how schools are corrupting the minds of today's youth, only I can actually <b>prove </b>that calculus teaches kids to be unethical.<br /><br /><h3>Conclusion / Apologies / Further Reading</h3><blockquote class="tr_bq"><span style="background-color: white;">As far as the laws of mathematics refer to reality, they are not certain; as far as they are certain, they do not refer to reality. - Einstein</span></blockquote>It goes without saying that I've heavily simplified the arguments I've cited, and any mistakes are mine. If you are interested in using logical reasoning to improve the world, then you should check out <a href="http://www.effective-altruism.com/">Effective Altruism</a>. If you are more of a "nonconstructive altruist", then you can do a Google Scholar search for "sustainable development" or read the papers cited below to learn more.<br /><br />And most importantly: if you are a student who is being punished for misbehaving in a calculus class, please 1) tell your teacher about the Basu–Mitra–Pivato result about how calculus causes people to disrespect their elders and 2) film their reaction and put it on YouTube. (Now that's effective altruism!)<br /><br /><ul><li><span style="background-color: white; color: #222222; font-family: Arial, sans-serif; font-size: 13px; line-height: 16.1200008392334px;">Basu, Kaushik, and Tapan Mitra. "Aggregating infinite utility streams with intergenerational equity: the impossibility of being Paretian." 
</span><i style="background-color: white; color: #222222; font-family: Arial, sans-serif; font-size: 13px; line-height: 16.1200008392334px;">Econometrica</i><span style="background-color: white; color: #222222; font-family: Arial, sans-serif; font-size: 13px; line-height: 16.1200008392334px;"> 71.5 (2003): 1557-1563.</span></li><li><span style="background-color: white; color: #222222; font-family: Arial, sans-serif; font-size: 13px; line-height: 16.1200008392334px;">Pivato, Marcus. "Sustainable preferences via nondiscounted, hyperreal intergenerational welfare functions." (2008).</span></li><li><span style="background-color: white; color: #222222; font-family: Arial, sans-serif; font-size: 13px; line-height: 16.1200008392334px;">Zame, William R. "Can intergenerational equity be operationalized?" </span><i style="background-color: white; color: #222222; font-family: Arial, sans-serif; font-size: 13px; line-height: 16.1200008392334px;">Theoretical Economics</i><span style="background-color: white; color: #222222; font-family: Arial, sans-serif; font-size: 13px; line-height: 16.1200008392334px;"> 2 (2007): 187-202.</span></li></ul><h2>If you want to start a startup, go work for someone else</h2><p><i>2014-09-02</i></p>When you look online for advice about entrepreneurship, you will see a lot of "just do it": <br /><blockquote class="tr_bq"><span style="background-color: white;">The best way to get experience... is to start a startup. So, paradoxically, if you're too inexperienced to start a startup, what you should do is start one. That's a way more efficient cure for inexperience than a normal job. 
- Paul Graham, <a href="http://paulgraham.com/notnot.html">Why to Not Not Start a Startup</a></span></blockquote><blockquote class="tr_bq"><span style="background-color: white;">There is very little you will learn in your current job as a {consultant, lawyer, business person, economist, programmer} that will make you better at starting your own startup. Even if you work at someone else’s startup right now, the rate at which you are learning useful things is way lower than if you were just starting your own. - David Albert, <a href="http://dave.is/when.html">When should you start a startup?</a></span></blockquote>This advice almost never comes with citations to research or quantitative data, from which I have concluded:<br /><blockquote class="tr_bq">The sort of person who jumps in and gives advice to the masses without doing a lot of research first generally believes that you should jump in and do things without doing a lot of research first. </blockquote>As readers of this blog know, I don't believe in doing anything without doing a ton of research first, and have therefore come to the surprising conclusion that the best way to start a startup is by doing a lot of background research first.<br /><br />Specifically, I would make two claims:<br /><ol><li>It's unclear whether the average person learns anything from a startup.</li><li>It is clear that the average person learns something from direct employment, and that they will almost certainly make more money there (which can fund their later ventures).</li></ol>I think these two theoretical claims lead to one empirical one:<br /><blockquote class="tr_bq">If you want to start a successful startup, you should work in direct employment first.</blockquote><h3>Evidence</h3><div>Rather than boring you with a narrative, I will just present some choice quotes:</div><ul><li>"We found that among the 24 possible success factors identified in the literature, 8 are homogeneous significant success 
factors for NTVs [New technology ventures]: ... (6) founders' marketing experience; (7) founders' industry experience... 5 [other factors] were not significant: ... (2) founders' experience with start-ups" <a href="http://onlinelibrary.wiley.com/doi/10.1111/j.1540-5885.2007.00280.x/full">Success Factors in New Ventures: A Meta-analysis</a></li><li>"Human capital variables [measured by things like past startup experience] have limited impact on startup performance, and the few significant effects are split equally between enhancing and impeding performance." <a href="http://www.library.auckland.ac.nz/subject-guides/bus/docs/PickingWinners2004.pdf">Picking winners or building them? Alliance, intellectual, and human capital as selection criteria in venture financing and performance of biotechnology startups.</a></li><li>"We find that a spell of self-employment is associated with lower hourly wages compared to workers who were consecutively wage-employed. We also show, however, that this effect disappears - and even becomes positive in some settings - for formerly self-employed who find dependent employment in the same sector as their self-employment sector." <a href="http://www.sciencedirect.com/science/article/pii/S0883902610000236">Is self-employment really a bad experience?: The effects of previous self-employment on subsequent wage-employment wages</a></li><li>Entrepreneurs don't seem to learn much from their failures: "first-time entrepreneurs have only a 18% chance of succeeding [i.e. have a successful exit] and entrepreneurs who previously failed have a 20% chance of succeeding." <a href="http://w4.stern.nyu.edu/finance/docs/pdfs/Seminars/063w-gompers.pdf">Skill vs. 
luck in entrepreneurship and venture capital: Evidence from serial entrepreneurs</a></li><li>"Our most important finding is that the reward to the entrepreneurs who provide the ideas and long hours of hard work in these startups is zero in almost three quarters of [startups], and small on average once idiosyncratic risk is taken into consideration" - <a href="http://web.stanford.edu/~rehall/Hall-Woodward%20on%20entrepreneurship.pdf">The Burden of the Nondiversifiable Risk of Entrepreneurship</a></li></ul><div><h3>Even a stopped clock is right twice a day</h3>It's interesting to think about what exactly the "people don't learn anything from a startup" hypothesis would look like. If we take the above-cited figure of a 20% chance of succeeding in any given startup, then even if each success is independent, most people will have succeeded at least once by their fourth venture (since 1 &minus; 0.8<sup>4</sup> &asymp; 59%).<br /><br />So the message many in the startup community repeat – "if you keep at it long enough, eventually you will succeed" – is still completely true. I just think you could succeed quicker if you go work for someone else first.<br /><h3>But… Anecdata!</h3>I am sure that there are a lot of people who sucked at their first startup, learned a ton, and then crushed it on their second startup. But those people probably also would've sucked at their first year of direct employment, learned a ton, and then crushed it even more when they did start a company.<br /><br />There are probably people who learn better in a startup environment, and you may be one of them, but the odds are against it.</div><div><h3>Attribution errors</h3></div><div>So if entrepreneurs don't learn anything in their startups, why do very smart people with a ton of experience like Paul Graham think they do? 
One explanation which has been advanced is the "Fundamental Attribution Error", which refers to "people's tendency to place an undue emphasis on internal characteristics to explain someone else's behavior in a given situation, rather than considering external factors." Wikipedia gives this example:<br /><blockquote class="tr_bq">Subjects read essays for and against Fidel Castro, and were asked to rate the pro-Castro attitudes of the writers. When the subjects believed that the writers freely chose the positions they took (for or against Castro), they naturally rated the people who spoke in favor of Castro as having a more positive attitude towards Castro. However, contradicting Jones and Harris' initial hypothesis, when the subjects were told that the writer's positions were determined by a coin toss, they still rated writers who spoke in favor of Castro as having, on average, a more positive attitude towards Castro than those who spoke against him. In other words, the subjects were unable to properly see the influence of the situational constraints placed upon the writers; they could not refrain from attributing sincere belief to the writers.</blockquote></div><div>Even in the extreme circumstance where people are explicitly told that an actor's performance is solely due to luck, they still believe that there must've been some internal characteristic involved. In the noisy world of startups where great ideas fail and bad ideas succeed it's no surprise that people greatly overestimate the effect of "skill". Baum and Silverman found that:<br /><blockquote class="tr_bq">VCs... appear to make a common attribution error overemphasizing startups’ human capital when making their investment decisions. - <a href="http://www.library.auckland.ac.nz/subject-guides/bus/docs/PickingWinners2004.pdf">Picking winners or building them? 
Alliance, intellectual, and human capital as selection criteria in venture financing and performance of biotechnology startups</a></blockquote>And if venture capitalists, whose sole job consists of figuring out which startups will succeed, regularly make these errors, then imagine how much worse it must be for the rest of us.<br /><br />(It also doesn't bode well for this essay – I'm sure that even after reading all the evidence I cited, most readers will still attribute their startup heroes' success to said heroes' skill, intelligence and perseverance.)<br /><h3>Conclusion</h3>I wrote this because I've become annoyed with the "just do it" mentality of so many entrepreneurs who spout some perversion of Lean Startup methods at me. Yes, doing experiments is awesome, but learning from people who have already done those experiments is usually far more efficient. (Academics joke that "a month in the lab can save you an hour in the library.")<br /><br />If you just think a startup will be fun then by all means go ahead and start something from your dorm room. But if you really want to be successful then consider apprenticing yourself to someone else for a couple years first.<br /><br />(NB: I am the founder of a company which I started after eight years of direct employment.)<br /><h3>Works cited </h3></div><ul><li>Baum, Joel AC, and Brian S. Silverman. "Picking winners or building them? Alliance, intellectual, and human capital as selection criteria in venture financing and performance of biotechnology startups." Journal of Business Venturing 19.3 (2004): 411-436.</li><li>Gompers, Paul, et al. Skill vs. luck in entrepreneurship and venture capital: Evidence from serial entrepreneurs. No. w12592. National Bureau of Economic Research, 2006.</li><li>Kaiser, Ulrich, and Nikolaj Malchow-Møller. "Is self-employment really a bad experience?: The effects of previous self-employment on subsequent wage-employment wages." 
Journal of Business Venturing 26.5 (2011): 572-588.</li><li>Song, M., Podoynitsyna, K., Van Der Bij, H. and Halman, J. I. M. (2008), Success Factors in New Ventures: A Meta-analysis. Journal of Product Innovation Management, 25: 7–27. doi: 10.1111/j.1540-5885.2007.00280.x</li><li>Also see Pablo's comment below</li></ul><div><br /></div><br /><h2>An Interactive Guide to Population Ethics</h2><p><i>2014-04-20</i></p><link rel="stylesheet" href="http://code.jquery.com/ui/1.10.4/themes/black-tie/jquery-ui.css" /><script src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript">MathJax.Hub.Config({ tex2jax: {inlineMath: [['$','$'], ['\\(','\\)']] }, TeX: { equationNumbers: {autoNumber: "all"} } }); </script> <script type="text/javascript" src="https://www.google.com/jsapi"></script><script type="text/javascript" src="http://code.jquery.com/jquery-2.1.0.min.js"></script><script type="text/javascript" src="http://code.jquery.com/ui/1.10.4/jquery-ui.min.js"></script><script src="//cdnjs.cloudflare.com/ajax/libs/numeral.js/1.4.5/numeral.min.js"></script> <script type="text/javascript">var diminishing_chart; var diminishing_indifference_chart; var critical_chart; var critical_indifference_chart; var dampening_chart; var dampening_diminishing_chart; google.load("visualization", "1", { packages: ["corechart"] }); function drawDiminishing(n) { var i, math, yVals, xVal; math = MathJax.Hub.getAllJax("diminishing_text")[0]; MathJax.Hub.Queue(["Text",math,'g(x) = \\sqrt[' + n + ']{x}']); yVals = []; yVals[0] = ['Utility', 'g(x) = x^(1/2)', 'g(x) = x^(1/' + n + ')']; for (i = 1; i < 10; i++) { xVal = (i - 1) * 10; yVals[i] = [xVal, Math.pow(xVal, 1 / 2), Math.pow(xVal, 1 / n)]; } var data = google.visualization.arrayToDataTable(yVals); var options = { 'title': 
'Utility\'s Diminishing Returns', 'hAxis': { title: 'Utility' }, 'vAxis': { title: 'Value of Utility' }, 'animation': { duration: 600, easing: 'out' } }; diminishing_chart.draw(data, options); } function drawDiminishingIndifference(n){ var math = MathJax.Hub.getAllJax("diminishing_indifference_text")[0]; MathJax.Hub.Queue(["Text",math,'g(x) = \\sqrt[' + n + ']{x}']); var yVals = []; yVals[0] = ['Utility', 'g(x) = x', 'g(x) = x^(1/' + n + ')']; for (var i = 1; i < 22; i++) { var xVal = (i - 1) * 5; yVals[i] = [xVal, 100 - xVal, Math.pow(Math.pow(100, 1/n) - Math.pow(xVal, 1 / n), n)]; } var data = google.visualization.arrayToDataTable(yVals); var options = { 'title': 'Two-Person Indifference Curve', 'hAxis': { title: 'Person X\'s Utility' }, 'vAxis': { title: 'Person Y\'s Utility' }, 'animation': { duration: 600, easing: 'out' } }; diminishing_indifference_chart.draw(data, options); } function drawCritical(n) { $("#critical_text").text('c = ' + n); var yVals = []; yVals[0] = ['Utility', 'c = 0', 'c = 10', 'c = ' + n]; for (var i = 1; i < 10; i++) { yVals[i] = [i, 100, 110 - (10 * i), (100 + n) - (n * i)]; } var data = google.visualization.arrayToDataTable(yVals); var options = { 'title': 'Population Size vs. 
Value', 'hAxis': { title: 'Number of People' }, 'vAxis': { title: 'Value of Population' }, 'animation': { duration: 600, easing: 'out' } }; critical_chart.draw(data, options); } function drawCriticalIndifference(n){ $("#critical_indifference_text").text('c = ' + n); var yVals = []; yVals[0] = ['People', 'c = 0', 'c = 10', 'c = ' + n]; for (var i = 1; i < 11; i++) { yVals[i] = [i, 100 / i, (90 / i) + 10, ((100 - n) / i) + n]; } var data = google.visualization.arrayToDataTable(yVals); var options = { 'title': 'Critical-Level Indifference Curves', 'hAxis': { title: 'Population Size' }, 'vAxis': { title: 'Average Utility' }, 'animation': { duration: 600, easing: 'out' } }; critical_indifference_chart.draw(data, options); } function drawDampening(n) { $('#dampening_text').text('n = ' + n); var yVals = []; yVals[0] = ['Utility', 'V(u)']; for (var i = 1; i < 10; i++) { xVal = (i - 1) * 10; if(xVal <= n) { yVals[i] = [xVal, 10 * xVal]; } else { yVals[i] = [xVal, 10 * n]; } } var data = google.visualization.arrayToDataTable(yVals); var options = { 'title': 'Number-Dampened Value (Average Utility = 10)', 'hAxis': { title: 'Population Size' }, 'vAxis': { title: 'Value of Population' }, 'animation': { duration: 600, easing: 'out' } }; dampening_chart.draw(data, options); } function drawDampeningDiminishing(n) { var math = MathJax.Hub.getAllJax("dampening_diminishing_text")[0]; MathJax.Hub.Queue(["Text",math,'V(u)=\\frac{1}{\\sqrt[' + n + ']{n}}\\sum_{i=1}^{n}u_i']); var yVals = []; switch(n){ case 1: legLabel = 'Identity'; break; case 2: legLabel = 'Square Root'; break; case 3: legLabel = 'Cube Root'; break; case 4: legLabel = 'Fourth Root'; break; case 5: legLabel = 'Fifth Root'; break; } yVals[0] = ['Utility', 'Square root', legLabel]; for (var i = 1; i < 10; i++) { xVal = (i - 1) * 10; yVals[i] = [xVal, Math.pow(xVal, 1/2) * 10, Math.pow(xVal, 1/n) * 10]; } var data = google.visualization.arrayToDataTable(yVals); var options = { 'title': 'Number-Dampened Value (Average
Utility = 10)', 'hAxis': { title: 'Population Size' }, 'vAxis': { title: 'Value of Population' }, 'animation': { duration: 600, easing: 'out' } }; dampening_diminishing_chart.draw(data, options); } function drawAll() { drawDiminishing(3); $("#diminishing_slider").slider({ range: false, min: 1, max: 5, value: 3, slide: function(event, ui) { drawDiminishing(ui.value); } }); drawDiminishingIndifference(2); $("#diminishing_indifference_slider").slider({ range: false, min: 1, max: 5, value: 2, slide: function(event, ui) { drawDiminishingIndifference(ui.value); } }); drawCritical(-5); $("#critical_slider").slider({ range: false, min: -10, max: 10, value: -5, slide: function(event, ui) { drawCritical(ui.value); } }); drawCriticalIndifference(-5); $("#critical_indifference_slider").slider({ range: false, min: -10, max: 10, value: -5, slide: function(event, ui) { drawCriticalIndifference(ui.value); } }); $('#critical_level_select').change(function(){ $('#critical_table_val0').text( numeral(1000 * 100 - 1000 * $(this).val()).format('0,0') ); $('#critical_table_val1').text( numeral(10000000 * 0.1 - 10000000 * $(this).val()).format('0,0') ); $('#critical_table_val2').text( numeral(1000 * -4 - 1000 * $(this).val()).format('0,0') ); $('#critical_table_val3').text( numeral(100 * -1 - 100 * $(this).val()).format('0,0') ); }); drawDampening(50); $("#dampening_slider").slider({ range: false, min: 0, max: 70, value: 50, step: 10, slide: function(event, ui) { drawDampening(ui.value); } }); drawDampeningDiminishing(3); $("#dampening_diminishing_slider").slider({ range: false, min: 1, max: 5, value: 3, slide: function(event, ui) { drawDampeningDiminishing(ui.value); } }); $('#dampening_select').change(function(){ var root = $(this).val(); var math_planet = MathJax.Hub.getAllJax("planet_value")[0]; MathJax.Hub.Queue(["Text",math_planet,'\\sqrt[' + root + ']{2000}\\cdot 100 = ' + numeral(Math.pow(2000,1/root) * 100).format('0,0')]); var math_comet = 
MathJax.Hub.getAllJax("comet_value")[0]; MathJax.Hub.Queue(["Text",math_comet,'\\sqrt[' + root + ']{1000}\\cdot 100 + ' + '\\sqrt[' + root + ']{1000}\\cdot 100 = ' + numeral(2 * Math.pow(1000,1/root ) * 100).format('0,0') ]); }) .change(); } function chartSetup() { diminishing_chart = new google.visualization.LineChart(document.getElementById('diminishing_returns')); diminishing_indifference_chart = new google.visualization.LineChart(document.getElementById('diminishing_indifference')); critical_chart = new google.visualization.LineChart(document.getElementById('critical')); critical_indifference_chart = new google.visualization.LineChart(document.getElementById('critical_indifference')); dampening_chart = new google.visualization.LineChart(document.getElementById('dampening')); dampening_diminishing_chart = new google.visualization.LineChart(document.getElementById('dampening_diminishing')); drawAll(); } google.setOnLoadCallback(chartSetup); </script> <style>div.btw_slider, div.btw_slider_text { margin-left: auto; margin-right:auto; } div.btw_slider_text { font-style: italic; font-family: MathJax_Math; margin-top: 5px; width: 100px; } div.btw_slider { width: 200px; } </style>Population Ethics is the branch of philosophy which deals with questions involving - you guessed it - populations. Most of the problems that are solved by population ethics are things involving tradeoffs between quantity and quality of life. In bumper-sticker form, the question investigated in this post is: <blockquote><b>Should we make more happy people, or more people happy?</b><sup>1</sup></blockquote> When a disaster occurs, most of us have the intuition that we should help improve the lives of survivors. But very few of us feel an obligation to have more children to offset the population loss. (i.e. our intuitions line up with making "more people happy" instead of "more happy people".) 
This is a surprisingly difficult position to defend, but it reminds me of <a href="http://www.utilitarian-essays.com/">Brian Tomasik's</a> joke: <ul style="list-style-type: none"><li>Bob: "Ouch, my stomach hurts."</li><li> </li><li>Classical total utilitarian: "Don't worry! Wait while I create more happy people to make up for it."</li><li>Average utilitarian: "Never fear! Let me create more people with only mild stomach aches to improve the average."</li><li>Egalitarian: "I'm sorry to hear that. Here, let me give everyone else awful stomach aches too."</li><li>...</li><li>Negative total utilitarian: "Here, take this medicine to make your stomach feel better."</li></ul> <h3>Limiting theorems</h3>It turns out that population ethics has, to a certain extent, been "solved". This is a technical result, so uninterested readers can skip to the next section, but basically the various questions I discuss in this blog post are the <b>only questions remaining</b>. Specifically: <blockquote>Let $\mathbf u = \left(u_1,u_2,\dots\right)$ be the utilities of people $1,2,\dots$ and similarly let $\mathbf u' = \left(u_1',u_2',\dots\right)$ be the utilities of a different population. Further, suppose we have a "reasonable" way of defining which of two populations is better. Then there is a "value function" $V$ such that population $\mathbf u$ is preferable to population $\mathbf u'$ if and only if $V(\mathbf u) > V(\mathbf u')$.
Furthermore, $V$ has the form: $$V(\mathbf u)=f(n)\sum_{i=1}^{n}\left[ g(u_i)-g(c)\right]$$ </blockquote>The three sections of the blog post concern: <ol><li>The concavity of $g$, which moderates our inequality aversion</li><li>The value of $c$, which is known as the "critical level"</li><li>And the form of $f$, which is the "number dampening"</li></ol>I hope to write a post soon on why these are the only three remaining questions, but interested readers can see <a href="http://www.ruf.rice.edu/~econ/papers/2000papers/06Bossert.pdf">(Blackorby, Bossert and Donaldson, 2000)</a> in the mean time.<sup>2</sup><h3>Inequality</h3><p>In the wake of the financial crisis, movements like Occupy Wall Street raised wealth inequality as a major political issue.</p> <div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-8ZJrCLDbTBQ/UzbZfGzQrSI/AAAAAAAAAQk/bsg_57JajPE/s1600/If-us-land-mass-were-distributed-like-us-wealth.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-8ZJrCLDbTBQ/UzbZfGzQrSI/AAAAAAAAAQk/bsg_57JajPE/s320/If-us-land-mass-were-distributed-like-us-wealth.png" /></a><br/><i>Wealth inequality in the US</i></div><p>An intuition that underlies these concerns is that the worse off people are, the more important it is to help them. We might donate to a charity to help starving people eat, but not one which helps rich yuppies eat even fancier food. The formal way to model this is to state that one person's utility has <a href="http://en.wikipedia.org/wiki/Diminishing_returns">diminishing returns</a> to society's overall well-being (i.e. 
additional utility to that person benefits society less and less as they become better off).</p> <div id="diminishing_returns" style="width: 600px; height: 250px;"></div><div id="diminishing_slider" class="btw_slider"></div><div id="diminishing_text" class="btw_slider_text">$g(x)=\sqrt{x}$</div><i>(As in the rest of this post, you can use the slider to modify the function and see how changing $g$ affects our ethical choices.)</i> <p>One way of visualizing the impact this has on our decisions about populations is to use an <a href="http://en.wikipedia.org/wiki/Indifference_curve">indifference curve</a>. In the chart below, the x-axis represents the utility of person X and the y-axis the utility of person Y. Each line on the chart indicates a set of points for which we are <i>indifferent</i> - for example, the blue line includes the point (50,50) and the point (100,0) since if we don't believe that utility has diminishing returns we don't care about how utility is divided up between the populace. (50 + 50 = 100 + 0).</p><div id="diminishing_indifference" style="width:600px; height: 450px;"></div><div id="diminishing_indifference_slider" class="btw_slider"></div><div id="diminishing_indifference_text" class="btw_slider_text">$g(x)=\sqrt{x}$</div>You can see that the stronger we think returns diminish, the more inequality-averse we become. For example, if $g(x)=\sqrt{x}$ we are indifferent between $(60,10)$ and $(100,0)$ since $\sqrt{60} + \sqrt{10}\approx \sqrt{100} + \sqrt{0}$, meaning that a 40-point increase in person X's welfare is needed to offset the 10-point loss in person Y's welfare, since Y's welfare is so low. 
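These indifference calculations are easy to check mechanically. Here is a minimal JavaScript sketch (the <code>socialValue</code> helper is a name of mine, separate from the chart code on this page), assuming $g(x)=x^{1/n}$:

```javascript
// Social value of a population under concave g: V(u) = sum of g(u_i),
// with g(x) = x^(1/n). Larger n means more sharply diminishing returns.
function socialValue(utilities, n) {
  return utilities.reduce(function (total, u) {
    return total + Math.pow(u, 1 / n);
  }, 0);
}

// With n = 1 (no diminishing returns) only the total matters,
// so (50, 50) and (100, 0) come out exactly equal:
var equal = socialValue([50, 50], 1);   // 100
var unequal = socialValue([100, 0], 1); // 100

// With g = sqrt (n = 2), (60, 10) is roughly as good as (100, 0):
var a = socialValue([60, 10], 2);  // sqrt(60) + sqrt(10), about 10.9
var b = socialValue([100, 0], 2);  // sqrt(100) + sqrt(0) = 10
```

Re-running the same comparison with a larger $n$ widens the gap between the equal and unequal splits, which is exactly what dragging the slider above illustrates.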
This is an important point, so I'll call it out: <blockquote>Inequality aversion is a <i>conclusion</i> of population ethics, not an <i>assumption</i><sup>3</sup></blockquote><h3>Interlude - The Representation of Populations</h3>We've just shown a very non-trivial result: if $g$ is concave (meaning that increasing utility has diminishing returns), then we are inequality-averse. (Conversely, if $g$ were convex then we would be inequality-seeking, but I don't know of anyone who has argued this.) One problem we're going to run into soon is that there are too many variables to easily visualize. So I want to bring up a certain fact about population ethics: <blockquote>For any population $u$, there is a population $u'$ such that: <ol><li>The number of people in $u$ and $u'$ are the same</li><li>Everyone in $u'$ has the same utility as each other (i.e. $u'$ is "perfectly equitable")</li><li>And we are indifferent between $u$ and $u'$</li></ol></blockquote>For example, if we believed utility did not have diminishing returns, we would be indifferent between $(75,25)$ and $(50,50)$ because the total utility is the same. This means that: <blockquote>Any time we want to compare populations $p$ and $q$, we can instead compare $p'$ and $q'$ where both $p'$ and $q'$ are perfectly equitable (i.e. every person in $p'$ has the same utility as each other, and similarly for $q'$).</blockquote>A perfectly equitable population can be parameterized by exactly two variables: the number of people in the population, and the average utility. While there are theoretical implications of this, the most relevant fact for us is that it means we can keep using two-dimensional graphs. <h3>Critical Levels</h3>Back to the topic at hand. 
The following assumption sounds very strange, but it's made quite frequently in the literature: <blockquote>Even if your life is worth living <i>to you</i> and you don't influence anyone else, that doesn't mean the <i>population as a whole</i> benefits from your existence. Specifically, your welfare must be greater than a certain amount, known as the "critical level", before your existence benefits society.<sup>4</sup></blockquote>More formally: <blockquote>Value to society = utility - critical level</blockquote>Or $$V(\mathbf u)=\sum_{i=1}^{n} \left(u_i - c\right)$$ where $c$ is the critical level. (Note that $c$ is a constant, and independent of $\mathbf u$.) I think this is best illustrated with an example. Suppose we have a constant amount of utility, and we're wondering how many people to divide it up between. (As mentioned earlier, this is a perfectly equitable population, so everyone gets an equal share.) Here's how changing the critical level changes our opinion of the optimal population size: <div id="critical" style="width:600px; height: 450px;"></div><div id="critical_slider" class="btw_slider"></div><div id="critical_text" class="btw_slider_text">c=10</div>The impact of critical levels can be summarized as: <blockquote>Positive critical levels give a "penalty" for every person who's alive, whereas negative critical levels give a "bonus"</blockquote>This is clear since $$V(\mathbf u)=\sum_{i=1}^{n} \left(u_i - c\right)=\left(\sum_{i=1}^{n} u_i\right)-nc$$ Here are indifference curves for different critical levels: <div id="critical_indifference" style="width:600px; height: 450px;"></div><div id="critical_indifference_slider" class="btw_slider"></div><div id="critical_indifference_text" class="btw_slider_text">c=10</div>As the critical level gets lower, we are increasingly willing to decrease average utility in exchange for increasing the population size. 
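Since $c$ is just a constant, the computation is trivial to sketch in a few lines of JavaScript (the <code>criticalLevelValue</code> helper is a hypothetical name of mine, not part of this page's chart code):

```javascript
// Critical-level value: V(u) = sum over i of (u_i - c),
// which is just (total utility) - n*c for a population of n people.
function criticalLevelValue(utilities, c) {
  var total = utilities.reduce(function (t, u) { return t + u; }, 0);
  return total - utilities.length * c;
}

// Ten people at utility 10 apiece (total utility 100):
var pop = [10, 10, 10, 10, 10, 10, 10, 10, 10, 10];
var neutral = criticalLevelValue(pop, 0); // 100: lives count at face value
var penalty = criticalLevelValue(pop, 5); //  50: a 5-point penalty per person
var bonus = criticalLevelValue(pop, -5);  // 150: a 5-point bonus per person
```

With a positive $c$, each additional life has to clear the bar $c$ before it adds any value to the total.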
The major motivation for having a positive critical level is that it avoids the <a href="https://en.wikipedia.org/wiki/Mere_addition_paradox">mere addition paradox</a> (sometimes known as the "Repugnant Conclusion"): <blockquote>For any possible population of at least ten billion people, all with a very high quality of life, there must be some much larger imaginable population whose existence, if other things are equal, would be better even though its members have lives that are barely worth living.<sup>5</sup></blockquote><p>In tabular form:</p><table><tr><th>Population</th><th>Size</th><th>Average Utility</th><th>Total Value <br />(c=0)</th><th>Total Value <br />(c = <select name="critical_level" id="critical_level_select"><option value="-10">-10</option><option value="-5">-5</option><option value="5">5</option><option value="10" selected="selected">10</option></select>) </th></tr><tr><td>A</td><td>1,000</td><td>100</td><td>100,000</td><td id="critical_table_val0">90,000</td></tr><tr><td>B</td><td>10,000,000</td><td>0.1</td><td>1,000,000</td><td id="critical_table_val1">-99,000,000</td></tr><tr><td>C</td><td>1,000</td><td>-4</td><td>-4,000</td><td id="critical_table_val2">-14,000</td></tr><tr><td>D</td><td>100</td><td>-1</td><td>-100</td><td id="critical_table_val3">-1,100</td></tr></table><p>Many people have the intuition that A is preferable to B. We can see that only by having a positive critical level can we make this intuition hold.</p> <p>Unfortunately, we can also see that having a positive value of <i>c</i> results in what Arrhenius <a href="http://people.su.se/~guarr/Texter/An%20Impossibility%20Theorem%20for%20Welfarist%20Axiologies%20in%20EP%202000.pdf">has called</a> the "sadistic conclusion": We prefer population C to population B, even though everyone in C is suffering and the people in B have positive lives.
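For a perfectly equitable population this reduces to $V = n(\bar u - c)$, so the table's entries can be reproduced in a few lines of JavaScript (the <code>populationValue</code> helper is hypothetical, not code from this page):

```javascript
// Value of a perfectly equitable population: V = size * (average utility - c).
function populationValue(size, avgUtility, c) {
  return size * (avgUtility - c);
}

// With c = 0 the enormous barely-happy population B beats A
// (the Repugnant Conclusion):
var a0 = populationValue(1000, 100, 0);     //     100,000
var b0 = populationValue(10000000, 0.1, 0); //   1,000,000

// With c = 10, A beats B...
var a10 = populationValue(1000, 100, 10);     //      90,000
var b10 = populationValue(10000000, 0.1, 10); // -99,000,000

// ...but the suffering population C now beats B too
// (the Sadistic Conclusion):
var c10 = populationValue(1000, -4, 10);      //     -14,000
```

Switching $c$ from 0 to 10 flips the A-versus-B ordering, but at the price of preferring C to B.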
And if <i>c</i> is negative we have another sort of sadistic conclusion: We prefer C to D even though there are fewer people suffering in D and no one is better off in C than they are in D.</p> <p>Some people will bite the bullet and prefer the Sadistic Conclusion to the Repugnant one. But it's hard to make a case for this being the less intuitive of the two, meaning we must have a critical level of zero.</p><h3>Number Dampening</h3>Canadian philosopher Thomas Hurka <a href="http://www.repugnant-conclusion.com/hurka-populationsize.pdf">has argued</a> for the two following points: <ol><li>For small populations, we should care about <i>total</i> welfare</li><li>For large populations, we should care about <i>average</i> welfare</li></ol><p>Independent of the question about whether people <i>should</i> care more about average welfare for large populations, it seems clear that in practice we do (as I've <a href="http://philosophyforprogrammers.blogspot.com/2013/06/why-inequality-cant-matter.html">discussed before</a>).</P><p>The way to formalize this is to introduce a function $f$:</p>$$V(\mathbf u)=f(n)\sum_{i=1}^{n}u_i$$ where $$f(n) = \left\{ \begin{array}{lr} 1 & : n \leq n_0 \\ n_0/n & : n > n_0 \end{array} \right.$$ If we have fewer than $n_0$ people (i.e. if the population is "small") then this is equivalent to total utilitarianism. If we have more (i.e. the population is "large") then it's equivalent to average utilitarianism. Graphically: <div id="dampening" style="width:600px; height: 450px;"></div><div id="dampening_slider" class="btw_slider"></div><div id="dampening_text" class="btw_slider_text">n<sub>0</sub>=50</div>The non-differentiability at $n=n_0$ is pretty ridiculous though, so instead of a strict cutoff we could claim that there are diminishing returns to population size, just like we claimed that there are diminishing returns to utility in the first section. 
For example, we could state that $$V(\mathbf u)=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}u_i$$ This gives us a graph like: <div id="dampening_diminishing" style="width:600px; height: 450px;"></div><div id="dampening_diminishing_slider" class="btw_slider"></div><div id="dampening_diminishing_text" class="btw_slider_text">$V(\mathbf u)=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}u_i$</div> <p>Even with this modification though, it still seems pretty implausible that population size has diminishing returns. The relevant fact is that $\sqrt{x+y}\not=\sqrt{x}+\sqrt{y}$, so we can't just break populations apart.<sup>6</sup> Therefore, we have to consider every single person who has ever lived (and who ever will live) before we can make ethical decisions. As an example of the odd behavior this "holistic" reasoning implies:</p> <blockquote>Some researchers are on the verge of discovering a cure for cancer. Just before completing their research, they learn that the population of humans 50,000 years ago was smaller than they thought. As a result, they drop their research to focus instead on having more children.</blockquote> <p>An example will explain why this is the correct behavior if you believe in number-dampening. Say we're using the value function</p> $$V(\mathbf u)=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}u_i$$ <p>and we can either move everyone alive from having 10 utils up to 10.1 (discovering cancer cure) or else add a new person with utility 100 (have a child). 
Which option is best depends on the population size:</p> <table><tr><th>Population size</th><th>Value of society w/ cancer cure</th><th>Value of society w/ new child</th></tr><tr><td>500</td><td>$\frac{1}{\sqrt{500}}\left(500\cdot 10.1\right)=226$</td><td style="background-color:lightgreen">$\frac{1}{\sqrt{501}}\left(500\cdot 10 + 100\right)=228$</td></tr><tr><td>5,000</td><td style="background-color:lightgreen">$\frac{1}{\sqrt{5000}}\left(5000\cdot 10.1\right)=714$</td><td>$\frac{1}{\sqrt{5001}}\left(5000\cdot 10 + 100\right)=708$</td></tr></table> <p><center><i>Having a child is better if the population size is 500, but worse if the population size is 5,000.</i></center></p> <p>It goes against our intuition that the population size in the distant past should affect our decisions about what to do today. One simple way around this is to just declare that "population size" is the number of people <i>currently alive</i>, not the people who have ever lived. Nick Beckstead's <a href="https://docs.google.com/viewer?a=v&pid=sites&srcid=ZGVmYXVsdGRvbWFpbnxuYmVja3N0ZWFkfGd4OjExNDBjZTcwNjMxMzRmZGE">thesis</a> has an interesting response:</p> <blockquote><i>The Separated Worlds</i>: There are only two planets with life. These planets are outside of each other’s light cones. On each planet, people live good lives. Relative to each of these planets’ reference frames, the planets exist at the same time. But relative to the reference frame of some comet traveling at a great speed (relative to the reference frame of the planets), one planet is created and destroyed before the other is created.</blockquote> <p>To make this exact, let's say each planet has 1,000 people each with utility level 100. 
Then we have:</p><table><tr><th>Dampening Amount</th><th>Value on both planets</th><th>Value on comet</th></tr><tr><td><select name="dampening_select" id="dampening_select"><option value="1">None</option><option value="2" selected="selected">Square Root</option><option value="3">Cube Root</option><option value="4">Fourth Root</option></select></td><td id="planet_value">$1$</td><td id="comet_value">$1$</td></tr><tr><td>None</td><td>200,000</td><td>200,000</td></tr></table><p>How valuable a population is shouldn't change if you split it into arbitrary sub-populations, so it's hard to make the case for number dampening.</p> <div class="separator" style="clear: both; text-align: center;"><a href="http://www.xkcd.com/103/" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://imgs.xkcd.com/comics/moral_relativity.jpg" /></a></div><h3>Conclusion</h3><p>I started off by claiming (without proof) that for any "reasonable" way of determining which population is better, we could equivalently use a value function $V$ such that population $\mathbf u$ is better than population $\mathbf u'$ if and only if $V(\mathbf u) > V(\mathbf u')$. Furthermore, I claimed $V$ must have the form: $$V(\mathbf u)=f(n)\sum_{i=1}^n\left[g(u_i)-g(c)\right]$$ In this post, we investigated modifying $f,g$ and $c$. However, we saw that having $c$ be anything but zero leads to a "sadistic conclusion", and having $f$ be non-constant leads to the "Separated Worlds" problem, meaning that we conclude $V$ must be of the form $$V(\mathbf u) = \sum_{i=1}^n g(u_i)$$ Where $g$ is a continuous, monotonically increasing function. This is basically classical (or total) utilitarianism, with perhaps some inequality aversion.</p> <p>It's common to view ethicists as people who just talk all day without making any progress on the issues, and to some extent this reputation is deserved. 
But in the area of population ethics, I hope I've convinced you that philosophers have made tremendous progress, to the point that one major question (the form of the value function) has been almost completely solved.</p> <h3>Footnotes</h3><ol><li>I'm sure I didn't come up with this phrase, but I can't find who originally said it. I'd be much obliged to any commenters who can let me know.</li><li>The obvious objection I'm ignoring here is the "person-affecting view", or "the slogan." I'm pretty skeptical of it, but it's worth pointing out that not all philosophers agree that population ethics must be of this form.</li><li>Of course, if we came to the conclusion that inequality is good, we might start questioning our assumptions, so this is perhaps not completely true.</li><li>If the critical level is negative, then the converse holds (your life can suck but you'll still be a benefit to society). This is rarely argued.</li><li>From Parfit's original <a href="http://www.amazon.com/Reasons-Persons-Oxford-Paperbacks-Parfit/dp/019824908X/">Reasons and Persons</a></li><li>This isn't just a problem with the square root - if $f(x+y)=f(x)+f(y)$ with $x,y\in\mathbb R$ then $f(x)=cx$ if $f$ is non-"pathological". (This is known as <a href="http://en.wikipedia.org/wiki/Cauchy%27s_functional_equation">Cauchy's functional equation</a>.)
</ol> <h3>Similar Posts</h3><ol><li><a href="http://philosophyforprogrammers.blogspot.com/2013/12/an-improvement-to-impossibility-of.html">An Improvement to "The Impossibility of a Satisfactory Population Ethics"</a></li><li><a href="http://philosophyforprogrammers.blogspot.com/2013/06/why-inequality-cant-matter.html">Why Inequality Can't Matter</a></li></ol>Xodaraphttp://www.blogger.com/profile/00235189388350960670noreply@blogger.com0tag:blogger.com,1999:blog-6172724226008713264.post-40099074084523661512014-01-24T08:25:00.001-08:002014-01-24T08:25:22.986-08:00On my inability to improve decision making<i>Summary: It’s been suggested that improving decision making is an important thing for altruists to focus on, and there are a wide variety of computer programs which aim to improve clinician decision making ability. Since I <a href="http://www.jefftk.com/p/professional-philanthropy">earn to give</a> as a programmer making healthcare software, you might naively assume that some of the good I do is through improving clinician decision making. You would be wrong. I give an overview of the problem, and suggest that the problems which make improving medical decision making hard are general, and might suggest low-hanging fruit is rare in the field of decision support.</i><br /><br /><blockquote>Against stupidity the gods themselves contend in vain. - Friedrich Schiller</blockquote><br />In 1966, the Massachusetts General Hospital Utility Multi-Programming System (MUMPS) was created as one of the first healthcare information technology platforms. 
Running on the “cheap” ($70,000) PDP-7, it spread to become one of the most common pieces of infrastructure in healthcare - to this day, if you walk into your doctor’s office there’s a good chance some part of what you see has MUMPS in its stack.<br /><br />A few years later, researchers at Stanford using a computer with the approximate power of today’s wristwatches created MYCIN, a program capable of outperforming human physicians in diagnosing bacterial infections. Unlike MUMPS, such programs are still far from use in everyday care today: when I go to the doctor’s office I’m not diagnosed by computerized super-doctors but instead by the time-honored combination of human gut, skill and the occasional glance at a reference volume. Even “low-skill” jobs like calling patients to remind them about their appointments are still usually done by receptionists or temps with a printed call list, a process essentially indistinguishable from 50 years ago.<br /><br />If people are better at making decisions, then we will be better at a whole range of things, making decision-support technology an important priority for altruists. It was listed as one of 80,000 Hours’ <a href="http://80000hours.org/blog/300-which-cause-is-most-effective--300">top priorities</a>, for example. I haven’t seen many empirical examinations of how decision-making technology improves (or fails to improve) our abilities, so I offer healthcare IT as a case study.<br /><br /><h3>Different, not fewer, problems</h3>Clinicians sometimes order the wrong thing. Perhaps they forget the dosing and accidentally order 200 milligrams instead of 200 micrograms, or they order penicillin because they forgot that the patient’s allergic.<br /><br />It’s relatively easy to program a computer to warn the user when their prescription is off by an order of magnitude or contraindicates with an allergy, but it turns out that doctors are actually pretty good at what they do most of the time.
If they order an unusually high dose, it’s probably because the patient has an unusually severe case. If they order a med that the patient is allergic to, it’s probably because they decided the benefits outweigh the risks. As a result, these warnings are almost always noise without a signal.<br /><br />The result is familiar to anyone who used the version of Microsoft Office with Clippy: clinicians slam on the keyboard to close all message boxes without bothering to read the warnings, completely negating any possible benefits. This “alert fatigue” (as it is politely termed) sometimes stems from organizations’ fears of lawsuits keeping extraneous alerts around (<a href="http://171.67.114.118/content/20/2/377.abstract">Tiwari et al. 2013</a>), but even in trials which are done specifically to improve health and are judged successful enough to publish, less than a fourth have any impact on patient outcomes (<a href="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3179735/">Hemens et al. 2011</a>).<br /><br /><h3>GIGO</h3>Anyone who’s done machine learning is aware of the maxim “garbage in, garbage out”. Even the most amazing prediction algorithm will give bad results if you give it bad input, and current medical algorithms are far from perfect.<br /><br />Medical records are written of, by and for humans, and there is a large resistance to change. If your program requires someone with MD-equivalent skills to translate the patient’s free-text chart into a discrete dataset that the software could analyse, then why would you use it? You might as well just hire the doctor to do the diagnosis herself.<br /><br />This problem is largely what’s held back programs like MYCIN.
While they work great if your research grant provides for a grad student sweatshop to code data into your specialized format, they don’t work so well in the real world.<br /><br /><h3>Doctor-Hardness</h3>To summarize these two problems: people had originally thought they could slice off just a tiny piece of clinicians’ jobs and improve that without worrying about the rest. But it turned out that in order to do well in this tiny slice they needed to essentially replicate all of what a doctor does - in computer science terms, these problems are “doctor-hard”.<br /><br /><h3>Cost</h3>What have we spent to get these minimal benefits?<br /><br />The NIH’s Biomedical Information Science and Technology initiative <a href="http://www.biomedicalcomputationreview.org/6/2/9.pdf">has funded</a> about $350 million worth of research (not all of it in clinical decision support), but this amount pales in comparison to what governments have spent in getting IT into the hands of front-line physicians. <br /><br />The <a href="http://en.wikipedia.org/wiki/HITECH_ACT">HITECH Act</a> (part of the 2009 US stimulus bill) is expected to spend about $35 billion on increasing the adoption of electronic medical records. On the other side of the pond, the NHS’ <a href="http://en.wikipedia.org/wiki/Npfit">troubled IT program</a> ended up costing around <a href="http://www.telegraph.co.uk/news/uknews/1473927/Bill-for-hi-tech-NHS-soars-to-20-billion.html">£20 billion</a>, up a mere order of magnitude from the original £2.3 billion estimate.<br /><br />An explicit cost-benefit analysis of decision support research would require a lot more careful analysis of these expenditures, but my goal is just to point out that the lack of results is not due to lack of trying.
Decades of work and billions of dollars have been spent in this area.<br /><br /><h3>Efficiency</h3>In retrospect, I think one argument we could have used to predict the non-cost-effectiveness of these interventions is to ask why they haven’t already been invented. The pre-computer medical world is filled with checklists, and so if there were an easy way to detect mistyped prescriptions or diagnose bacterial infections, it would probably already be used.<br /><br />This is to make a sort of “efficiency” argument - if there is some easy way to improve decision making, it’s probably already been implemented. So when we’re examining proposed decision support techniques, we might want to ask why they haven’t already been done. If we can’t pin it on a new disruptive technology or something similar, we might want to be skeptical that the problem is really so easy to solve.<br /><br /><h3>Acknowledgements</h3><a href="http://www.utilitarian-essays.com/">Brian Tomasik</a> proofread an earlier version of this post.<br /><br /><h3>Works Cited</h3>Ash, Joan S., Marc Berg, and Enrico Coiera. "Some unintended consequences of information technology in health care: the nature of patient care information system-related errors." Journal of the American Medical Informatics Association 11.2 (2004): 104-112. http://171.67.114.118/content/11/2/104.full <br /><br />Hemens, Brian J., et al. "Computerized clinical decision support systems for drug prescribing and management: a decision-maker-researcher partnership systematic review." Implement Sci 6.1 (2011): 89. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3179735/ <br /><br />Reckmann, Margaret H., et al. "Does computerized provider order entry reduce prescribing errors for hospital inpatients? A systematic review." Journal of the American Medical Informatics Association 16.5 (2009): 613-623. <br /><br />Tiwari, Ruchi, et al.
"Enhancements in healthcare information technology systems: customizing vendor-supplied clinical decision support for a high-risk patient population." Journal of the American Medical Informatics Association 20.2 (2013): 377-380. http://171.67.114.118/content/20/2/377.abstract <br /><br />Williams, D. J. P. "Medication errors." Journal of the Royal College of Physicians of Edinburgh 37.4 (2007): 343. http://www.rcpe.ac.uk/journal/issue/journal_37_4/Williams.pdf Xodaraphttp://www.blogger.com/profile/00235189388350960670noreply@blogger.com0tag:blogger.com,1999:blog-6172724226008713264.post-73840668003782209652014-01-18T15:37:00.000-08:002014-01-18T15:37:04.792-08:00Why Charities Might Differ in Effectiveness by Many Orders of Magnitude<script src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"> MathJax.Hub.Config({ tex2jax: {inlineMath: [['$','$'], ['\\(','\\)']] }, TeX: { equationNumbers: {autoNumber: "all"} } }); </script> <br /><center><i>Summary:</i> Brian has <a href="http://utilitarian-essays.com/why-charities-do-not-differ-astronomically.html">recently argued</a> that because "flow-through" (second-order) effects are so uncertain, charities don't (on expectation) differ in their effectiveness by more than a couple orders of magnitude. I give some arguments here about why that might be wrong.</center><br /><h3>1. Why does anything differ by many orders of magnitude?</h3>Some cities are very big. Some are very small. This fact has probably never bothered you before.
But when you look at how city sizes stack up, it looks somewhat peculiar:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-NiC379xfEHg/UthyfuNeqsI/AAAAAAAAAPo/K5l0TjGEgqg/s1600/master.img-001.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-NiC379xfEHg/UthyfuNeqsI/AAAAAAAAAPo/K5l0TjGEgqg/s320/master.img-001.jpg" /></a></div><center><i>Taken from <a href="http://www.krutikoff.narod.ru/Activities/NSS2011/Eeckhout2004aer.pdf">Gibrat's Law for (All) Cities</a>, Eeckhout 2004.</i></center><br />The X-axis is the size of the city, in (natural) logarithmic scale. The Y-axis corresponds to the density (fraction) of cities with that population. The peak is around the mark of 8 on the X-axis, which corresponds to $e^8\approx 3,000$ people. <br /><br />You can see that the empirical sizes of cities almost perfectly match a normal ("bell curve") distribution. What's the explanation for this? Is mayoral talent distributed exponentially? When deciding to move to a new city do people first take the log of the new city's size and then roll some normally-distributed dice?<br /><br />It turns out that this is solely due to dumb luck and mathematical inevitability.<br /><hr />Suppose every city grows by a random amount each year. One year, it will grow 10%, the next 5%, the year after it will shrink by 2%. After these three years, the total change in population is<br />$$1.10\cdot 1.05\cdot 0.98$$<br />As in the above graph, we take the log<br />$$\log\left(1.10\cdot 1.05\cdot 0.98\right)$$<br />A property of logarithms you may remember is that $\log(a\cdot b)=\log a + \log b$. Rewriting (2) with this property gives<br />$$\log 1.10+ \log 1.05+\log 0.98$$<br />The <a href="http://en.wikipedia.org/wiki/Central_limit_theorem">central limit theorem</a> tells us that when you add a bunch of random things together, you'll end up with a normal distribution.
We're clearly adding a bunch of random things together here, so we end up with the bell curve we see above.<br /><h3>2. Why charities might differ by many orders of magnitude</h3>Some of Brian's points are about how even if a charity is good in one dimension, it's not necessarily good in others (performance is "independent"). The point of the above is to demonstrate that we don't need dependence to have widely varying impacts. We just need a structure where people's talents are randomly distributed, but critically their talents <i>have a multiplicative effect</i>.<br /><br />There are some talents which obviously cause a multiplier. A charity's ability to handle logistics ("reduce overhead") will multiply the effectiveness of everything else they do. Their ability to increase the "denominator" of their intervention (number of bednets distributed, number of leaflets handed out, etc.) is another. PR skills, fundraising etc. all plausibly have a multiplicative impact.<br /><br />More controversially, some <a href="http://reflectivedisequilibrium.blogspot.com/2013/12/what-proxies-to-use-for-flow-through.html">proxies for flow-through effects</a> might have a multiplicative impact. Scientific output is probably more valuable in times of peace than in times of war. GDP increases are probably better when there's a fair and just government, instead of the new wealth going to a few plutocrats. 
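The multiplicative story is easy to check numerically. Here is a minimal sketch; the number of cities, the starting size, and the ±10% yearly shock range are invented for illustration, not taken from Eeckhout's data:

```python
import math
import random

random.seed(0)

# Start 10,000 "cities" at the same size, then apply 200 years of
# independent multiplicative growth shocks (each year between -10% and +10%).
sizes = [1000.0] * 10000
for _ in range(200):
    sizes = [s * random.uniform(0.90, 1.10) for s in sizes]

# The log of each size is a sum of 200 random terms, so by the central
# limit theorem the log sizes are approximately normal (sizes log-normal).
logs = [math.log(s) for s in sizes]
mean = sum(logs) / len(logs)
sd = (sum((x - mean) ** 2 for x in logs) / len(logs)) ** 0.5

print(f"log-size mean {mean:.2f}, sd {sd:.2f}")
print(f"smallest city {min(sizes):.0f}, largest {max(sizes):.0f}")
```

Every city faces identically distributed luck, yet the largest ends up orders of magnitude bigger than the smallest: the wide spread needs nothing but multiplication.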
<br /><br />Here's a simulation of charities' effectiveness with 10 dimensions, each uniformly drawn from the range [0,10].<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-2jFnreV_HcQ/UtlR_vKaNzI/AAAAAAAAAP4/VXnjwkUwxKg/s1600/charity+density.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-2jFnreV_HcQ/UtlR_vKaNzI/AAAAAAAAAP4/VXnjwkUwxKg/s320/charity+density.png" /></a></div>The red line corresponds to Brian's scenario (where the independent dimensions simply add) and, as he describes, effectiveness is very closely clustered around 50. But as the dimensions have more interactions, the effectiveness spreads out, until the purely multiplicative model (purple line) where charities differ by many orders of magnitude. <br /><br /><h3>3. Picking winners</h3>Say that impact is the product of measurable direct impacts and unmeasurable flow-through effects. Algebraically: $I=DF$. If $D$ and $F$ are independent, the expectation of the product factors:<br />$$E[I]=E[DF]=E[D]E[F]$$<br />So if two charities differ by a factor of say 1,000 in their direct impact then, holding expected flow-through effects equal, their total impact would (on expectation) differ by a factor of 1,000 as well.<br /><br />This isn't a perfect model. But I do think that it's not always correct to model impacts as a sum of iid variables, and there is a plausible case to be made that not only do charities differ "astronomically" but we can expect those differences even with our limited knowledge.<br /><br /><b>Acknowledgements</b><br /><br />This post was obviously inspired by Brian, and I talked about it with Gina extensively.
The log-normal proof is known as <a href="http://en.wikipedia.org/wiki/Gibrat%27s_law">Gibrat's Law</a> and is not due to me.<br />Xodaraphttp://www.blogger.com/profile/00235189388350960670noreply@blogger.com7tag:blogger.com,1999:blog-6172724226008713264.post-38606964924596088802013-12-30T05:48:00.001-08:002014-01-24T08:05:40.514-08:00Predictions of ACE's surveying results<script src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript">MathJax.Hub.Config({ tex2jax: {inlineMath: [['$','$'], ['\\(','\\)']] }, TeX: { equationNumbers: {autoNumber: "all"} } }); </script> <br />Carl Shulman is polling people about their predictions for the results of the upcoming <a href="http://www.animalcharityevaluators.org/research/interventions/leafleting/leafleting-outreach-study-fall-2013/">ACE study</a> to encourage less biased interpretations. Here are mine.<br /><br />Assuming the control group follows the data in e.g. <a href="http://ije.oxfordjournals.org/content/31/1/78/T1.expansion.html">the Iowa Women's Health Study</a>, they should eat 166g meat/day with sd 66g.<sup>1</sup> (For the rest of this post, I'm going to assume everything is normally distributed, even though I realize that's not completely true.)<br /><br />For mathematical ease, let's take our prior from the <a href="http://ccc.farmsanctuary.org/the-powerful-impact-of-college-leafleting-part-1/">farm sanctuary study</a> and say: 2% are now veg, and an additional 5% eat "a lot less" meat, which I'll define as cutting in half. So the mean of this group is 159g (4.2% less) with sd 69g.<br /><br />I don't know what tests they will do, but let's look at a t-test because that's easiest. The test statistic here is:<br />$$t=\frac{166-159}{\sqrt{\frac{66}{N_1}+\frac{69}{N_2}}}$$<br />Let's assume 5% of those surveyed were in the intervention group.
Solving for $N$ in<br />$$1.96=\frac{7}{\sqrt{\frac{66}{.95N}+\frac{69}{.05N}}}$$<br />we find $N\approx 350$, meaning that I expect the null hypothesis to be rejected at the usual $\alpha=.05$ if they collected at least 350 survey responses.<sup>2</sup> I'm leaning slightly towards it not being significant, but I'm not sure how much data they collected.<br /><br />Here's my estimate of their estimate (I can't do this analytically, so this is based on simulations):<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-PyYbskBQJbw/UsDD4MKTWBI/AAAAAAAAAPQ/fGEAHUMGvII/s1600/vegest.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-PyYbskBQJbw/UsDD4MKTWBI/AAAAAAAAAPQ/fGEAHUMGvII/s320/vegest.png" /></a></div>You can see that the expected outcome is the true difference of about 4 veg equivalents per 100 leaflets, but with such a small sample size there is a 25% chance that we'll find leafleted people were <i>less</i> likely to go veg.<br /><br />Here's how a 50% confidence interval might shake out:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-W43hxnUVJjc/UsDErpG3sTI/AAAAAAAAAPY/iRdYhMTWDEg/s1600/vegci.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-W43hxnUVJjc/UsDErpG3sTI/AAAAAAAAAPY/iRdYhMTWDEg/s400/vegci.png" /></a></div>The left graph is the bottom of the CI, the right one is the top.<br /><br /><h3>Putting Money where my Mouth Is</h3>The point of this is so that I don't retro-justify my belief, which is that meta-research in animal-related fields is the most effective thing. I have a lot of model uncertainty, but I would broadly endorse the conclusions of the above.
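My simulation code isn't reproduced here, but the general shape is easy to sketch. The following uses the parameters above (control at 166g/day with sd 66, leafleted group at 159g/day with sd 69, 5% of 350 respondents leafleted); it is only an illustration, and ACE's actual analysis will certainly differ:

```python
import random
import statistics

random.seed(1)

N = 350                   # total survey responses
n_treat = int(0.05 * N)   # assume 5% received a leaflet
n_ctrl = N - n_treat

def estimated_effect():
    # Control eats 166 g/day (sd 66); leafleted group 159 g/day (sd 69).
    ctrl = [random.gauss(166, 66) for _ in range(n_ctrl)]
    treat = [random.gauss(159, 69) for _ in range(n_treat)]
    return statistics.mean(ctrl) - statistics.mean(treat)

diffs = [estimated_effect() for _ in range(2000)]
negative = sum(d < 0 for d in diffs) / len(diffs)

print(f"mean estimated reduction: {statistics.mean(diffs):.1f} g/day")
print(f"fraction of runs where leafleting looks harmful: {negative:.0%}")
```

The tiny treatment group (about 17 respondents under these assumptions) is what makes the estimate so noisy: a substantial fraction of simulated studies find a negative effect even though the assumed true effect is 7 g/day.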
The following represent ~2.5% probability events (each), which I will take as evidence I'm wrong.<br /><ul><li>If a 50% CI is exclusively above 9 veg equivalents per 100 leaflets, then I think its ability to attract people to veganism outweighs the knowledge we'd gain from more studies. Therefore, I pledge <b>$1,000</b> to VO or THL (or whatever top-ranked leafleting charity exists at the time).</li><li>If a 50% CI is exclusively below zero, then veg interventions in general are less useful than I thought. Therefore I pledge <b>$1,000</b> to MIRI (or another x-risk charity, if e.g. GiveWell Labs has a recommendation by then).</li></ul>I don't think my above model is completely correct, and I'm sure ACE will have a different parameterization, so I don't know that these are really the 5% tails, but I would consider either of them to be a surprising enough event that my current beliefs are probably wrong.<br /><br />I am open to friendly charity bets (if result is worse than X I give money to your charity, else you give to mine), if anyone else is interested.<br /><br /><b>Footnotes</b><br /><ol><li>I tried to use MLE to combine multiple analyses, but found that the standard deviation is > 10,000 g/day. It's a good thing ACE has professional statisticians on the job, because the data clearly is kind of complex.</li><li>I used $d.f.=\infty$</li></ol>Xodaraphttp://www.blogger.com/profile/00235189388350960670noreply@blogger.com4tag:blogger.com,1999:blog-6172724226008713264.post-67223900240540778402013-12-29T08:16:00.000-08:002013-12-29T08:16:13.466-08:00An Improvement to "The Impossibility of a Satisfactory Population Ethics"<script src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript">MathJax.Hub.Config({ tex2jax: {inlineMath: [['$','$'], ['\\(','\\)']] }, TeX: { equationNumbers: {autoNumber: "all"} } }); </script> <br />Gustaf Arrhenius has published a series of impossibility theorems involving ethics. 
His most recent is <a href="http://people.su.se/~guarr/Texter/The%20Impossibility%20of%20a%20Satisfactory%20Population%20Ethics%20in%20Descriptive%20and%20Normative%20Approaches%20to%20Human%20Behavior%202011.pdf">The Impossibility of a Satisfactory Population Ethics</a> which basically shows that several intuitive premises yield a stronger version of the <a href="http://plato.stanford.edu/entries/repugnant-conclusion/">repugnant conclusion</a>.<br /><br />If you know me, you know that I believe that modern ("abstract") algebra can help resolve problems in ethics. This is one example: using some basic algebra, we can get a stronger result than Arrhenius while using weaker axioms. <br /><br />This is a "standing on the shoulders of giants" type of result: mathematicians have had centuries to trim their axioms to the minimal required set, so once you're able to phrase your question in more standard notation you can quickly arrive at better conclusions. Similarly, the errors in Arrhenius' proof that I've noted in the footnotes are mostly errors of omission that many extremely smart people made, until others pointed out pathological cases where their assumptions were invalid.<br /><br /><h3>Assumptions</h3><br />We assume that it's possible to have lives that are worth living ("positive" welfare), lives not worth living ("negative" welfare) and ones on the margin ("neutral" welfare). Arrhenius doesn't specify what the relationship is between "positive" and "negative" welfare, but I think there's a very intuitive answer: they cancel each other out. Just as $(+1) + (-1) = 0$, a world with a person of $+1$ utility and one with $-1$ utility is equivalent to a world with people at the neutral level.<sup>1</sup><br /><br />We continue the analogy with addition by writing $Z=X+Y$ if $Z$ is the union of two populations $X$ and $Y$. Just as with normal addition, we assume that $X+Y$ is always defined<sup>2</sup> and that we can move parentheses around however we want, i.e. 
$(X+Y)+Z=X+(Y+Z)$. Next, I'm going to assume that the order in which you add people doesn't matter, i.e. $X+Y=Y+X$.<sup>3</sup> I will finish the analogy with addition by specifying that welfare is isomorphic to the integers.<sup>4</sup> <br /><br />(The above is just a long-winded way of saying that population ethics is isomorphic to the free abelian group on $\mathbb Z$.)<br /><br />Also, for simplicity, I will write $nX$ for $\underbrace{X+\dots+X}_{n\ times}$.<sup>5</sup><br /><br />Lastly, we need to define our ordering. I'll use the notation that $X\leq Y$ means "Population $X$ is morally no better than population $Y$" and require that $\leq$ is a <a href="http://en.wikipedia.org/wiki/Quasi-order">quasi-order</a>, i.e. $X\leq X$ and $X\leq Y, Y\leq Z$ implies that $X\leq Z$. Notably, this does not require us to believe that populations are totally ordered, i.e. there may be cases where we aren't sure which population is better.<br /><br />The major controversial assumption we need from Arrhenius is what he calls "non-elitism": for any $X,Y$ with $X-1>Y$ there is an $n>0$ such that for any population $D$ consisting of people with welfare levels between $X$ and $Y$: $(n+1)(X-1)+D\geq X+nY+D$. In less formal terms, this is basically saying that there are no "infinitely good" welfare levels.<br /><br /><h3>Claim</h3><br />We claim that any group following the above axioms results in:<br /><blockquote><i>The Very Repugnant Conclusion</i>: For any perfectly equal population<br />with very high positive welfare, and for any number of lives with very<br />negative welfare, there is a population consisting of the lives with negative welfare and lives with very low positive welfare which is better than the high welfare population, all things being equal.</blockquote><br /><h3>Unused Assumptions</h3><br />The following are assumptions Arrhenius makes which are unused. (Note: these are verbatim quotes from his paper, unlike the other assumptions.)
<br /><br />(Exercise for the advanced reader: figure out which of these also follow from the assumptions we did use.)<br /><ol><li><i>The Egalitarian Dominance Condition</i>: If population A is a perfectly<br />equal population of the same size as population B, and every person in<br />A has higher welfare than every person in B, then A is better than B,<br />other things being equal.</li><li><i>The General Non-Extreme Priority Condition</i>: There is a number n<br />of lives such that for any population X, and any welfare level A, a<br />population consisting of the X-lives, n lives with very high welfare, and<br />one life with welfare A, is at least as good as a population consisting<br />of the X-lives, n lives with very low positive welfare, and one life with<br />welfare slightly above A, other things being equal.</li><li><i>The Weak Non-Sadism Condition:</i> There is a negative welfare level and<br />a number of lives at this level such that an addition of any number of<br />people with positive welfare is at least as good as an addition of the<br />lives with negative welfare, other things being equal.</li></ol><h3>Proof</h3><b>Lemma</b><br /><br />First we prove a lemma: what Arrhenius calls "Condition $\beta$" and what mathematicians would refer to as a proof that our group is <a href="http://en.wikipedia.org/wiki/Archimedean_property">Archimedean</a>. This means that for any $X,Y>0$ there is an $n$ such that $nX\geq Y$.<br /><br />Basically we just observe that the "non-elitism" condition makes a simple induction. Starting from the premise that $(n+1)(X-1)+D\geq X+nY+D$, let $Y, D=0$, giving us that $(n+1)(X-1)\geq X$, i.e. $X$ is Archimedean with respect to $X-1$. 
Continuing the induction we find that $X$ is Archimedean with respect to $X-k$, completing the proof.<sup>6,7</sup><br /><br /><b>Theorem</b><br /><br />First, let me give a formal definition of the "Very Repugnant Conclusion": For any high level of welfare $H$, low positive level of welfare $L$ and negative level of welfare $-N$ and population sizes $c_{H},c_{N}$ there is some $c_{L}$ such that $c_{L}\cdot L+c_{N}\cdot(-N)\geq c_{H}H$.<br /><br />To prove our claim: we know there is some $k_{1}$ such that<br />$$k_{1}\cdot L\geq c_{H}\cdot H\label{ref1}$$<br />because of our lemma. Because it's a group, we know that $(N+-N)+L=L$ and moreover $(c_{N}N+c_{N}\cdot-N)+L=L$. Substituting this into (1) yields <br />$$k_{1}\left[\left(c_{N}N+c_{N}\cdot-N\right)+L\right]\geq c_{H}H\label{ref2}$$<br />Expanding the left hand side of (2) we get <br />$$k_{1}c_{N}N+k_{1}c_{N}\cdot(-N)+k_{1}L\label{ref3}$$ <br />By our lemma there is some $k_{2}$ such that $k_{2}L+D\geq k_{1}c_{N}N+D$; letting $D=k_{1}c_{N}(-N)+k_{1}L$ and using transitivity we get that <br />$$k_{2}L+k_{1}c_{N}(-N)+k_{1}L\geq c_{H}H$$<br />Rewriting terms leaves us with <br />$$\left(k_{1}+k_{2}\right)L+k_{1}c_{N}(-N)\geq c_{H}H$$<br />or<br />$$c_L L+c_{N'}(-N)\geq c_{H}H$$<br />$\blacksquare$<br /> <br /><h3>Comments</h3><br />I don't know that this shorter proof is much more convincing than Arrhenius' - my guess is that the people who disagree with an assumption are those who take a "person-affecting" view or otherwise object to the entire premise of the theorem. I would though say that:<br /><ol><li>None of the math I've used is beyond the average high-school student. It's just making the "algebra can be about things other than numbers" leap which is hard.</li><li>While abstract algebraic notation can be intimidating, it's relevant to realize that using it makes you more concise. 
(To the extent that a 26-page paper can be rewritten into a two-page blog post.)</li><li>Because we can be more concise and use standard terminology, it shines a light on what is really the controversial assumption: Non-Elitism.</li><li>Similarly, because we use standard concepts it's easier to see missing assumptions (e.g. I didn't realize that Arrhenius was missing a closure axiom until I tried to cast it in group theory terms).</li></ol>Lastly, because I can't finish any post without mentioning <a href="http://philosophyforprogrammers.blogspot.com/2013/06/a-graphical-introduction-to-lattices.html">lattice theory</a>, I'll add that some of the errors in Arrhenius' paper occurred because lattices are such a natural structure that he assumed they exist even where they weren't shown to. Of course, if you involve lattices more you end up with <a href="http://philosophyforprogrammers.blogspot.com/2013/10/an-argument-for-total-utilitarianism.html">total utilitarianism</a>, giving more insight into why Arrhenius' result holds.<br /><br /><h3>Acknowledgements</h3><br />I would like to thank Prof. Arrhenius for the idea, and Nick Beckstead for talking about it with me.<br /><br /><h3>Footnotes</h3><ol><li>Formally, for each $X$ there is some $-X$ such that for all $Y$, $X+(-X)+Y=Y$.</li><li>This isn't an explicit assumption in Arrhenius, but it's implicitly assumed just about everywhere</li><li>This arguably is controversial so I'll point out that commutativity isn't really required, but since it keeps the proof a lot shorter and most people will accept it, I'll keep the assumption</li><li>Arrhenius "proves" that welfare is order-isomorphic to $\mathbb Z$ incorrectly, so I'll just assume it instead of attempting to derive it from others. 
If you prefer, you can take his "Discreteness" axiom, add in assumptions that welfare is totally ordered and has no least or greatest element and you'll get the same thing.</li><li>Which is just to say that since it's an abelian group it's also a $\mathbb Z$-module.</li><li>Nick Beckstead thought that some people might not like using the neutral level like this, so I'll point out that you can use an alternative proof at the expense of an additional axiom. If you assume non-sadism, then you can find that $X+nY\geq X$ and therefore transitively $(n+1)(X-1)\geq X$.</li><li>This is somewhat misleading: we've only shown that the group is archimedean for totally equitable populations. That's all we need though.</li></ol>Xodaraphttp://www.blogger.com/profile/00235189388350960670noreply@blogger.com2tag:blogger.com,1999:blog-6172724226008713264.post-57045030136423014242013-12-22T11:52:00.000-08:002013-12-22T13:42:26.583-08:00How Conscious is my Relationship?One of the most interesting theories of consciousness is <a href="https://en.wikipedia.org/wiki/Integrated_Information_Theory">Integrated Information Theory</a> (IIT), proposed by Giulio Tononi. One of its more radical claims is that consciousness is a spectrum, and that virtually everything in the universe from the smallest atom to the largest galaxy has at least some amount of consciousness.<br /><br />Whatever criticisms one can make of IIT, the fact that it allows you to sit down and calculate how conscious a system is represents a fundamental advance in psychology. Since people say that good communication is the most important part of a relationship, and since any information-bearing system's consciousness can be calculated with IIT, I thought it would be fun to calculate how conscious Gina's and my relationship is.<br /><br /><h3>A Crash Course on Information</h3><b>Entropy</b><br />The fundamental measure of information is surprise.
The news could be filled with stories about how gravity remains constant, the sun rose from the east instead of the west and the moon continues to orbit the earth, but there is essentially zero surprise in these stories, and hence no information. If the moon were to escape earth's orbit we would all be shocked, and hence get a lot of information from this.<br /><br />Written words have information too. If I forget to type the last letter of this phras, you can probably still guess it, meaning that trailing 'e' carries little surprise/information. Claude Shannon, founder of information theory, did <a href="https://www.princeton.edu/~wbialek/rome/refs/shannon_51.pdf">precisely this experiment</a>, covering up parts of words and seeing how well one could guess the remainder. (English has around 1 bit of information per letter, for the record.)<br /><br />Whatever you're dealing with, the important part to remember is that "surprise" is when a low-probability event occurs, and that "information" is proportional to "surprise". Systems which can be predicted very well in advance, such as whether the sun rises from the east or the west, have very low surprise on average. Those which cannot be predicted, such as the toss of a coin, have much more surprising outcomes. (Maximally surprising probability distributions are those where every event is equally likely.) The measure of how surprising a system is (and hence how much information the system has) was named <a href="http://en.wikipedia.org/wiki/Entropy_(information_theory)">Entropy</a> by Shannon based on von Neumann's <a href="http://www.eoht.info/page/Neumann-Shannon+anecdote">advice</a> that "no one knows what entropy really is, so in a debate you will always have the advantage".<br /><br /><b>Divergence</b><br />Someone who knows modern English will have a bit more surprise than usual upon reading Shakespeare - words starting with "th" will end in "ou" more often than one would expect, but overall it's not too bad.
Chaucer's Canterbury Tales one can struggle through with difficulty, and Caedmon (the oldest known English poem) is so unfamiliar the letters are essentially unpredictable:<blockquote>nu scylun hergan hefaenricaes uard<br />metudæs maecti end his modgidanc<br />uerc uuldurfadur swe he uundra gihwaes<br />eci dryctin or astelidæ <br />- first four lines of Caedmon. Yes, this is considered "English".</blockquote>If we approximate the frequency of letters in Shakespeare based on our knowledge of modern English we won't get it too wrong (i.e. we won't frequently be surprised). But our approximation of Caedmon from modern English is horrific - we're surprised that 'u' is followed by 'u' in "uundra" and that 'd' is followed by 'æ' in "astelidæ". <br /><br />Since you can make a good estimate of letter frequencies in Shakespeare based on modern English, that means Shakespearean English and modern English have a low <a href="http://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence">divergence</a>. The fact that we're so frequently surprised when reading Caedmon means that the probability distribution there is highly divergent from modern English.<br /><br /><h3>Consciousness</h3>Believe it or not, Entropy and Divergence are the tools we need to calculate a system's consciousness. Roughly, we want to approximate a system's behavior by assuming that its constituent parts behave independently. The worse that approximation is, the more "integrated" we say the system is.
Knowing that, we can derive its Phi, the measure of its consciousness.<br /><br /><h3>Our Relationship as a Conscious Being</h3>Here is a completely unscientific measure of Gina's and my behavior over the last day or so:<br /><a href="http://1.bp.blogspot.com/-DA06zorUCc4/UrdcLGF0-II/AAAAAAAAAPA/cBcIjVK-zac/s1600/probact.png" imageanchor="1" ><img border="0" src="http://1.bp.blogspot.com/-DA06zorUCc4/UrdcLGF0-II/AAAAAAAAAPA/cBcIjVK-zac/s320/probact.png" /></a><br />The <i>(i,j)</i> entry is the fraction of time that I was doing activity <i>i</i> and Gina was doing activity <i>j</i>. (The marginal distributions are written, appropriately enough, in the margins.)<br /><br />You can see that my entropy is 1.49 bits, while Gina (being the unpredictable radical she is) has 1.69 bits. This means that our lives are slightly less surprising than the result of two coin tosses (I can hear the tabloids knocking already).<br /><br />However, our behavior is highly integrated: like many couples in which one person is loud and the other is a light sleeper, we're awake at the same time, and our shared hatred of driving means we only travel to see friends as a pair. Here's how it would look if we didn't coordinate our actions (i.e. assuming independence):<br /><iframe width='500' height='160' frameborder='0' src='https://docs.google.com/spreadsheet/pub?key=0AvtlLK1_TSrcdFBXUmNsTGNsRE5SSHF5RW1mYmRJeXc&output=html&widget=true'></iframe><br />The divergence between these two distributions is our relationship's consciousness (Phi). Some not-terribly-interesting computations show that Phi = 1.49 bits.<br /><br />The Pauli exclusion principle tells us that electrons in the innermost shell have 1 bit of consciousness (i.e. Phi = 1), meaning that our relationship is about as sentient as the average helium atom.
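For the curious, here is what those "not-terribly-interesting computations" look like in code. The joint distribution below is invented (the real one is in the table above), but the method is the same: for a two-part system, the divergence between the joint distribution and its independent approximation is exactly the mutual information between the parts:

```python
import math

# Invented joint distribution over (my activity, Gina's activity).
joint = {
    ("sleep", "sleep"): 0.35,
    ("work",  "work"):  0.30,
    ("work",  "play"):  0.05,
    ("play",  "work"):  0.05,
    ("play",  "play"):  0.25,
}

def entropy(dist):
    """Shannon entropy in bits: -sum of p*log2(p)."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Marginal distribution for each person.
mine, hers = {}, {}
for (a, b), p in joint.items():
    mine[a] = mine.get(a, 0.0) + p
    hers[b] = hers.get(b, 0.0) + p

# KL divergence from the independent approximation mine[a]*hers[b]
# to the true joint: the quantity used as Phi above.
phi = sum(p * math.log2(p / (mine[a] * hers[b]))
          for (a, b), p in joint.items())

print(f"my entropy:  {entropy(mine):.2f} bits")
print(f"her entropy: {entropy(hers):.2f} bits")
print(f"Phi: {phi:.2f} bits")
```

Phi is zero exactly when the marginals multiply out to the joint (i.e. when the couple's activities are independent), and positive otherwise.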
So if we do decide to break up, the murder of our relationship won't be much of a crime.<br /><br /><h3>Side Notes</h3>Obviously this is a little tongue-in-cheek, but one important thing you might wonder is why my decision to consider our relationship to have two components (me and Gina) is the correct one. Wouldn't it be better to assume that there are 200 billion elements (one for each neuron in our brains) or even 10<sup>28</sup> (one for each atom in our bodies)?<br /><br />The answer is that yes, that would be better (apart from the obvious computational difficulties). IIT says that consciousness occurs at the level of the system with the highest value of Phi, so if we performed the computation correctly, we would of course find that it's Gina and myself who are conscious, not our relationship, since we have higher values of Phi.<br /><br />(The commitment-phobic will notice a downside to this principle: if your relationship becomes so complex and integrated that its value of Phi exceeds your own, you and your partner would lose individual consciousness and become one joint entity!)<br /><br />I should also note that I've discussed IIT's description of the <i>quantity</i> of consciousness, but not its definition of <i>quality</i> of consciousness.<br /><br /><h3>Conclusion</h3>Our beliefs about consciousness are so contradictory it's impossible for any rigorous theory to support them all, and IIT does not disappoint on the "surprising conclusions" front. But some of its predictions have been confirmed by evidence (the areas of the brain with highest values of Phi are more linked to phenomenal consciousness, for example) and the fact that it can even <i>make</i> empirical predictions makes it an important step forward. 
I'll close with Tononi's description of how IIT changes our perspective on physics:<br /><blockquote>We are by now used to considering the universe as a vast empty space that contains enormous conglomerations of mass, charge, and energy—giant bright entities (where brightness reflects energy or mass) from planets to stars to galaxies. In this view (that is, in terms of mass, charge, or energy), each of us constitutes an extremely small, dim portion of what exists—indeed, hardly more than a speck of dust.<br /><br />However, if consciousness (i.e., integrated information) exists as a fundamental property, an equally valid view of the universe is this: a vast empty space that contains mostly nothing, and occasionally just specks of integrated information (Φ)—mere dust, indeed—even there where the mass-charge–energy perspective reveals huge conglomerates. On the other hand, one small corner of the known universe contains a remarkable concentration of extremely bright entities (where brightness reflects high Φ), orders of magnitude brighter than anything around them. Each bright “Φ-star” is the main complex of an individual human being (and most likely, of individual animals). I argue that such Φ-centric view is at least as valid as that of a universe dominated by mass, charge, and energy. In fact, it may be more valid, since to be highly conscious (to have high Φ) implies that there is something it is like to be you, whereas if you just have high mass, charge, or energy, there may be little or nothing it is like to be you. 
From this standpoint, it would seem that entities with high Φ <i>exist</i> in a stronger sense than entities of high mass.</blockquote><br /><h3>Acknowledgements</h3>The idea for this post came from Brian's essay on <a href="http://www.utilitarian-essays.com/suffering-subroutines.html">Suffering Subroutines</a>, and the basis for my description of IIT came from Tononi's <a href="http://www.biolbull.org/content/215/3/216.full">Consciousness as Integrated Information: a Provisional Manifesto</a>. Gina read an earlier draft of this post.Xodaraphttp://www.blogger.com/profile/00235189388350960670noreply@blogger.com2tag:blogger.com,1999:blog-6172724226008713264.post-1182139460466966512013-10-26T06:47:00.000-07:002013-10-27T09:51:23.789-07:00A Pure Math Argument for Total UtilitarianismAddition is a very special operation. Despite the wide variety of esoteric mathematical objects known to us today, none of them have the basic desirable properties of grade-school arithmetic.<br /><br />This fact was intuited by 19th-century philosophers in the development of what we now call "total" utilitarianism. In this ethical system, we can assign each person a real number to indicate their welfare, and the value of an entire population is the sum of each individual's welfare.<br /><br />Using modern mathematics, we can now prove the intuition of Mill and Bentham: because addition is so special, any ethical system which is in a certain technical sense "reasonable" is equivalent to total utilitarianism.<br /><br /><h3>What do we mean by ethics?</h3><br />The most basic premise is that we have some way of ordering individual lives.
<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-wpJ0NNJ-tfw/UmPhPh3W86I/AAAAAAAAANo/_Gx6ro2SXV0/s1600/rank1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-wpJ0NNJ-tfw/UmPhPh3W86I/AAAAAAAAANo/_Gx6ro2SXV0/s320/rank1.png" /></a></div><br />We don't need to say how much better some life is than another; we just need to be able to put them in order. We might have some uncertainty as to which of two lives is better:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-CsllI-7jgPQ/UmPhqDWWLxI/AAAAAAAAANw/mWaFHcZ0L9Y/s1600/rank+lattice.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-CsllI-7jgPQ/UmPhqDWWLxI/AAAAAAAAANw/mWaFHcZ0L9Y/s320/rank+lattice.png" /></a></div><br />In this case, we aren't certain if "Medium" or "Medium 2" is better. However, we know they're both better than "Bad" and worse than "Good".<br /><br />In the case when we always know which of two lives is better, we say that lives are <a href="http://en.wikipedia.org/wiki/Total_order">totally ordered</a>. If there is uncertainty, we say they are <a href="http://en.wikipedia.org/wiki/Lattice_order">lattice ordered</a>.<br /><br />In either case, we require that the ranking remain consistent when we add people to the population.
Here we add a person of "Medium" utility to each population:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-cLDyw0KgWY8/UmPj1jQbKdI/AAAAAAAAAN8/pGtLNpco7n0/s1600/transform.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-cLDyw0KgWY8/UmPj1jQbKdI/AAAAAAAAAN8/pGtLNpco7n0/s320/transform.png" /></a></div><br />The ranking on the right side of the figure above is legitimate because it keeps the order - if some life X is worse than Y, then (X + Medium) is still worse than (Y + Medium). This ranking below for example would fail that:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-2EM3Y8W-N2U/UmPkoa38CUI/AAAAAAAAAOE/qITSPNOMYlU/s1600/transform+bad.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-2EM3Y8W-N2U/UmPkoa38CUI/AAAAAAAAAOE/qITSPNOMYlU/s320/transform+bad.png" /></a></div><br />This ranking is inconsistent because it sometimes says that "Bad" is worse than "Medium" and other times says "Bad" is better than "Medium". A basic principle of ethics is that rankings should be consistent, and so rankings like the latter are excluded.<br /><br /><h3>Increasing population size</h3><br />The most obvious way of defining an ethics of populations is to just take an ordering of individual lives and "glue them together" in an order-preserving way, like I did above. This generates what mathematicians would call the <a href="http://en.wikipedia.org/wiki/Free_group">free group</a>. (The only tricky part is that we need good and bad lives to "cancel out", something which I've talked about <a href="http://philosophyforprogrammers.blogspot.com/2013/04/algebra-and-ethics.html">before</a>.)<br /><br />It turns out that merely gluing populations together in this way gives us a highly structured object known as a "lattice-ordered group". 
Here is a snippet of the resulting lattice:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-DToBgndGZjE/UmP8n_QU3dI/AAAAAAAAAOU/w5O1KoptwjU/s1600/derived+ranking.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-DToBgndGZjE/UmP8n_QU3dI/AAAAAAAAAOU/w5O1KoptwjU/s320/derived+ranking.png" /></a></div><br />This ranking is similar to what philosophers often call "Dominance" - if everyone in population P is better off than everyone in population Q, then P is better than Q. However, this is somewhat stronger - it allows us to compare populations of different sizes, something that the traditional dominance criterion doesn't let us do.<br /><br />Let's take a minute to think about what we've done. Using only the fact that individuals' lives can be ordered and the requirement that population ethics respects this ordering in a certain technical sense, we've derived a robust population ethics, about which we can prove many interesting things.<br /><br /><h3>Getting to total utilitarianism</h3><br />One obvious facet of the above ranking is that it's not total. For example, we don't know if "Very Good" is better than "Good, Good", i.e. if it's better to have welfare "spread out" across multiple people, or concentrated in one. This obviously prohibits us from claiming that we've derived total utilitarianism, because under that system we always know which is better.<br /><br />However, we can still derive a form of total utilitarianism which is equivalent in a large set of scenarios. To do so, we need to use the idea of an <i>embedding</i>. This is merely a way of assigning each welfare level a number. 
Here is an example embedding:<br /><br /><ul><li>Medium = 1</li><li>Good = 2</li><li>Very Good = 3</li></ul><br />Here's that same ordering, except I've tagged each population with the total "utility" resulting from that embedding:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-hLg7StQL-fk/UmQQULV2L7I/AAAAAAAAAOk/JSRcN8j68Nw/s1600/pop+ordering+tagged.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="209" src="http://1.bp.blogspot.com/-hLg7StQL-fk/UmQQULV2L7I/AAAAAAAAAOk/JSRcN8j68Nw/s320/pop+ordering+tagged.png" width="320" /></a></div><br />This is clearly not identical to total utilitarianism - "Very Good" has a higher total utility than "Medium, Medium" but we don't know which is better, for example.<br /><br />However, this ranking <b>never disagrees</b> with total utilitarianism - there is never a case where P is better than Q yet P has less total utility than Q.<br /><br />Due to a surprising theorem of Holder which I have <a href="http://philosophyforprogrammers.blogspot.com/2013/04/why-classical-utilitarianism-is-only.html">discussed before</a>, as long as we disallow "infinitely good" populations, there is always some embedding like this. Thus, we can say that:<br /><blockquote class="tr_bq">Total utilitarianism is the moral "baseline". There might be circumstances where we are uncertain whether or not P is better than Q, but if we are certain, then it must be that P has greater total utility than Q.</blockquote><br /><h3>An application</h3><br />Here is one consequence of these results. Many people, including myself, have the intuition that inequality is bad. In fact, it is so bad that there are circumstances where increasing equality is good even if people are, on average, worse off.<br /><br />If we accept the premises of this blog post, this intuition simply cannot be correct. 
If the inequitable society has greater total utility, it must be at least as good as the equitable one.<br /><br /><h3>Concluding remarks</h3><br />There are certain restrictions we want the "addition" of a person to a population to obey. It turns out that there is only one way to obey them: by using grade-school addition, i.e. total utilitarianism.Xodaraphttp://www.blogger.com/profile/00235189388350960670noreply@blogger.com0tag:blogger.com,1999:blog-6172724226008713264.post-90439254057235402682013-09-28T11:00:00.002-07:002013-09-28T11:00:46.622-07:00Double Your Effectiveness with a Bunny SuitI decided today that three leafletters were saturating the area, so after I ran out of my first stack I just started tallying the other two's success. One was in a bright blue bunny costume, and the other was more normally dressed.<br /><br />The bunny won (<i>p = .0008</i>).<br /><br /><table><tbody><tr><th></th><th>Accepted Leaflet</th><th>Declined Leaflet</th></tr><tr><th>Bunny Suit</th><td>20</td><td>11</td></tr><tr><th>No suit</th><td>18</td><td>47</td></tr></tbody></table><i>Contingency table of how many people accepted a leaflet when offered.</i><br /><i><br /></i> <br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-Lme0v0kNth0/UkcY0N8QDNI/AAAAAAAAANA/KMIB9PT3e3c/s1600/bunny+leaflet.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="http://3.bp.blogspot.com/-Lme0v0kNth0/UkcY0N8QDNI/AAAAAAAAANA/KMIB9PT3e3c/s320/bunny+leaflet.jpg" width="236" /></a></div><div style="text-align: center;"><i>Who would you take a leaflet from? 
</i></div>Xodaraphttp://www.blogger.com/profile/00235189388350960670noreply@blogger.com2tag:blogger.com,1999:blog-6172724226008713264.post-87799811456286595432013-06-30T13:51:00.001-07:002013-06-30T13:51:24.173-07:00A Graphical Introduction to Lattices<script src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript">MathJax.Hub.Config({ tex2jax: {inlineMath: [['$','$'], ['\\(','\\)']]} }); </script> <br /><br />Here is my (extended) family tree:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-jxZPrdeIXEc/Uc8aV68XNEI/AAAAAAAAALU/ALPGY-lXOg4/s463/family+tree.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-jxZPrdeIXEc/Uc8aV68XNEI/AAAAAAAAALU/ALPGY-lXOg4/s463/family+tree.png" /></a></div><br />Everyone in the tree shares at least one common ancestor and at least one common descendant. This makes my family tree a <a href="http://en.wikipedia.org/wiki/Lattice_(order)">lattice</a>, an important mathematical structure. While lattices are often presented in abstract algebraic form, they have a simple graphical representation called a <a href="https://en.wikipedia.org/wiki/Hasse_diagrams">Hasse diagram</a>, which is similar to a family tree. <br /><br />Because most lattice theory assumes a strong background in algebra, I think the results are not as well known as they should be. I hope to give a sampling of some lattices here, and a hint of their power. <br /><br /><h3>What are Lattices?</h3><br />A lattice is a structure with two requirements:<br /><ol><li>Every two elements have a "least upper bound." In the example above, this is the "most recent common ancestor".</li><li>Every two elements have a "greatest lower bound." In the example above, this is the "oldest common descendant".</li></ol>Note that the bound of some elements can be themselves; e.g. 
the most recent common ancestor of me and my mother is my mother. <br /><br />Lattices are a natural way of describing <a href="http://en.wikipedia.org/wiki/Partial_order">partial orders</a>, i.e. cases where we sometimes know which element came "first", but sometimes don't. For example, because the most recent common ancestor of my mother and myself is my mother, we know who came "first" - my mother must be older. Because the least upper bound of my mother and my father is some third person, we don't know which one is older.<br /><br /><h3>Shopping Carts</h3><br />Here's an example of four different ways to fill your shopping cart:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-8HTpKnkWX4w/Uc8CP_phERI/AAAAAAAAAJs/oTEc10j7PmU/s344/shopping+carts.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-8HTpKnkWX4w/Uc8CP_phERI/AAAAAAAAAJs/oTEc10j7PmU/s344/shopping+carts.png" /></a></div><br />The lines between two sets indicate preference: one apple is better than nothing, but one apple and one banana is even better than one apple. (Note that the arrows aren't directed, because every relation has a dual [e.g. the "better than" relation has a dual relation "worse than"]. So whether you read the graph top-to-bottom or bottom-to-top, it doesn't really matter. By convention, things on the bottom are "less than" things on the top.)<br /><br />Now, some people might prefer apples to bananas, and some might prefer bananas to apples, so we can't draw any lines between the "one apple" and the "one banana" situations.
Nonetheless, we can still say that you prefer having both to just one, so this order is pretty universal.<br /><br />The least upper bound in this case is "the worst shopping cart which is still preferred or equal to both things" (doesn't quite roll off the tongue, does it?), and the greatest lower bound is "the best shopping cart which is still worse than or equal to both things". Because these two operations exist, this means that shopping carts (or rather the goods that could be in shopping carts) make up a lattice.<br /><br />A huge swath of economic and ethical problems deals with preferences which can be put into lattices like this, which makes lattice theory a powerful tool for solving these problems.<br /><br /><h3>Division</h3><br />This is a more classical "math" lattice:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-Kfe0cEjnhFM/Uc8JySG4_-I/AAAAAAAAAKM/kYqOVH_AKFU/s213/divisibility.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-Kfe0cEjnhFM/Uc8JySG4_-I/AAAAAAAAAKM/kYqOVH_AKFU/s213/divisibility.png" /></a></div><br />Here a line between two integers indicates that the lower one is a factor of the higher one.
The least upper bound in this lattice is the <a href="http://en.wikipedia.org/wiki/Least_common_multiple">least common multiple</a> (lcm) and the greatest lower bound is the <a href="http://en.wikipedia.org/wiki/Greatest_common_divisor">greatest common divisor</a> (gcd, some people call this the "greatest common factor").<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-ZI1aG-iUrQU/Uc8MKHZLHGI/AAAAAAAAAKs/HCHrkW27ghM/s213/gcd.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-ZI1aG-iUrQU/Uc8MKHZLHGI/AAAAAAAAAKs/HCHrkW27ghM/s213/gcd.png" /></a><a href="http://1.bp.blogspot.com/-CWA4pnT14Fg/Uc8MLx9c-qI/AAAAAAAAAK0/B1CQmgWVwtM/s213/lcm.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-CWA4pnT14Fg/Uc8MLx9c-qI/AAAAAAAAAK0/B1CQmgWVwtM/s213/lcm.png" /></a></div><center><i>The greatest common divisor of 4 and 10 is 2, and the least common multiple of 2 and 3 is 6.</i></center><br />Again we don't have a total ordering - 2 isn't a factor of 3 or vice versa - but we can still say something about the order.<br /><br />An important set of questions about lattices deal with operations which don't change the lattice structure. For example, $k\cdot\gcd(x,y)=\gcd(kx,ky)$, so multiplying by an integer "preserves" this lattice. <br /><br /><a href="http://3.bp.blogspot.com/-piIQ7BLz73k/UdCYJUesXWI/AAAAAAAAAMM/dTKel6BFKjs/s524/lattice+mult.png" imageanchor="1" ><img border="0" src="http://3.bp.blogspot.com/-piIQ7BLz73k/UdCYJUesXWI/AAAAAAAAAMM/dTKel6BFKjs/s524/lattice+mult.png" /></a><br /><center><i>Multiplying the lattice by three still preserves the divisibility relation.</i></center><br />A lot of facts about gcd/lcm in integer lattices are true in all lattices; e.g. 
the fact that $x\cdot y=\gcd(x,y)\cdot \text{lcm}(x,y)$.<br /><br /><h3>Boolean Logic</h3>Here is the simplest example of a lattice you'll probably ever see:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-BWMy3w90lSs/Uc8SZXUf6tI/AAAAAAAAALE/AmX86w3Ipgc/s87/boolean.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-BWMy3w90lSs/Uc8SZXUf6tI/AAAAAAAAALE/AmX86w3Ipgc/s87/boolean.png" /></a></div><br />Suppose we describe this as saying "False is less than True". Then the operation AND becomes equivalent to the operation "min", and the operation OR becomes equivalent to the operation "max":<br /><ul><li>A AND B = min{A, B}</li><li>A OR B = max{A, B}</li></ul>Note that this holds true of more elaborate equations, e.g. A AND (B OR C) = min{A, max{B, C}}. In fact, even more complicated <a href="http://en.wikipedia.org/wiki/Boolean_algebra_(structure)">Boolean algebras</a> are lattices, so we can describe complex logical "gates" using the language of lattices.<br /><br /><h3>Everything is Addition</h3><br />I switch now from examples of lattices to a powerful theorem:<br /><blockquote>[Holder]: Every operation which preserves a lattice and doesn't use "incomparable" objects is equivalent to addition.<sup>1</sup></blockquote><br />The proof of this is fairly complicated, but there's a famous example which shows that multiplication is equivalent to addition: logarithms.<br /><br />The relevant fact about logarithms is that $\log(x\cdot y)=\log(x)+\log(y)$, meaning that the problem of multiplying $x$ and $y$ can be reduced to the problem of adding their logarithms. 
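Both correspondences are easy to check exhaustively in a few lines of Python (a quick sketch, using the convention that False is less than True):

```python
import itertools
import math

# In Python, False == 0 and True == 1, so min/max implement AND/OR directly.
for a, b in itertools.product([False, True], repeat=2):
    assert (a and b) == min(a, b)   # AND is "take the smaller"
    assert (a or b) == max(a, b)    # OR is "take the larger"

# The same pattern holds for compound gates.
for a, b, c in itertools.product([False, True], repeat=3):
    assert (a and (b or c)) == min(a, max(b, c))

# And the logarithm reduces multiplication to addition.
x, y = 3.0, 7.0
assert math.isclose(math.log(x * y), math.log(x) + math.log(y))
```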
Older readers will remember that this trick was used by <a href="http://en.wikipedia.org/wiki/Slide_rule">slide rules</a> before there were electronic calculators.<br /><br />Holder's theorem shows that similar tricks exist for any lattice-preserving operation.<br /><br /><h3>Everything is a Set</h3><br />Consider our division lattice from before (I've cut off a few numbers for simplicity):<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-pJD5CHHEDuo/Uc86N8Oj0XI/AAAAAAAAAL8/iNpYmXzOP9A/s213/small+div+lattice.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-pJD5CHHEDuo/Uc86N8Oj0XI/AAAAAAAAAL8/iNpYmXzOP9A/s213/small+div+lattice.png" /></a></div>Now replace each number with the set of all its factors:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-SdHsxrEbzXc/Uc84mERzC5I/AAAAAAAAALs/mlVxRdW3QLo/s209/set+lattice+small.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-SdHsxrEbzXc/Uc84mERzC5I/AAAAAAAAALs/mlVxRdW3QLo/s209/set+lattice+small.png" /></a></div><br />We now have another lattice, where the relationship between each node is set inclusion. E.g. {2,1} is included in {4,2,1}, so there's a line between the two. 
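This correspondence between divisibility and set inclusion can be verified directly. Here's a small Python check (the `divisors` helper is my own, not something from the post): divisibility matches subset inclusion, and the meet in both lattices agrees, with set intersection corresponding to the gcd.

```python
import math

def divisors(n):
    """The set of all positive factors of n."""
    return {d for d in range(1, n + 1) if n % d == 0}

for m in range(1, 30):
    for n in range(1, 30):
        # m divides n exactly when divisors(m) is a subset of divisors(n).
        assert (n % m == 0) == (divisors(m) <= divisors(n))
        # The greatest lower bound agrees in both lattices:
        # set intersection corresponds to gcd.
        assert divisors(m) & divisors(n) == divisors(math.gcd(m, n))
```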
You can see that we've made an equivalent lattice.<br /><br />This holds true more generally: any lattice is equivalent to another lattice where the relationship is set inclusion.<sup>2</sup><br /><br /><h3>Max and Min Revisited</h3><br />Consider the following statements from various areas of math:<br />$$\begin{aligned}\max\{x,y\} & = x + y - \min\{x,y\} && \text{ (Basic arithmetic)} \\ P(x\text{ OR } y) & = P(x) + P(y) - P(x\text{ AND } y) && \text{ (Probability)} \\ I(x; y) & = H(x) + H(y) - H(x,y) && \text{ (Information theory)} \\ \gcd(x,y) & = x \cdot y \div \text{lcm}(x,y) && \text{ (Basic number theory)}\end{aligned}$$When laid out like this, the similarities between these seemingly disconnected areas of math are obvious - these results all come from the basic lattice laws. It <a href="http://knuthlab.rit.albany.edu/papers/knuth-me07-final.pdf">turns out</a> that merely assuming a lattice-like structure for probability results in the sum, product and Bayes' rule of probability, giving an argument for the Bayesian interpretation of probability.<br /><br /><h3>Conclusion</h3><br />The problem with abstract algebraic results is that they require an abstract algebraic explanation. I hope I've managed to give you a taste of how lattices can be used, without requiring too much background knowledge.<br /><br />If you're interested in learning more: Most of what I know about lattices comes from Glass' <i>Partially Ordered Groups</i>, which is great if you're already familiar with group theory, but not so great otherwise. Rota's <a href="http://www.ams.org/notices/199711/comm-rota.pdf">The Many Lives of Lattice Theory</a> gives a more technical overview of lattices (as well as an overview of why everyone who doesn't like lattices is an idiot) and J.B. Nation has some good <a href="http://www.math.hawaii.edu/~jb/">notes on lattice theory</a>, both of which require slightly less background.
Literature about specific uses of lattices, such as in <a href="http://profs.sci.univr.it/~giaco/paperi/lattices-for-CS.pdf">computer science</a> or <a href="http://www.amazon.com/Logic-Algebra-Dolciani-Mathematical-Expositions/dp/0883853272/ref=sr_1_3?ie=UTF8&qid=1372556419&sr=8-3">logic</a>, also exists.<br /><br /><b>Footnotes</b><br /><ol><li>Formally, every l-group with only trivial convex subgroups is l-isomorphic to a subgroup of the reals under addition. Holder technically proved this fact for ordered groups, not lattice-ordered groups, but it's an immediate consequence.</li><li>By "equivalent" I mean l-isomorphic.</li></ol>Xodaraphttp://www.blogger.com/profile/00235189388350960670noreply@blogger.com2tag:blogger.com,1999:blog-6172724226008713264.post-9290733466117659952013-06-13T05:34:00.000-07:002013-06-16T08:15:16.492-07:00Why Inequality Can't Matter<script src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript">MathJax.Hub.Config({ tex2jax: {inlineMath: [['$','$'], ['\\(','\\)']]} }); </script> <br /><br />A <a href="http://faculty.chicagobooth.edu/christopher.hsee/vita/Papers/LessIsBetter.pdf">famous experiment</a> of Hsee's asks people how much they would pay for two different sets of dishware:<br /><br /><table><tr><th></th><th>Set A</th><th>Set B</th></tr><tr><td>Dinner plates:</td><td> 8, all in good condition</td><td> 8, all in good condition</td></tr><tr><td>Soup/salad bowls:</td><td> 8, all in good condition</td><td> 8, all in good condition</td></tr><tr><td>Dessert plates:</td><td> 8, all in good condition</td><td> 8, all in good condition</td></tr><tr><td>Cups:</td><td> 8, 2 of them are broken</td><td>None</td></tr><tr><td>Saucers:</td><td> 8, 7 of them are broken</td><td>None</td></tr></table><br />Note that Set A is a <a href="http://en.wikipedia.org/wiki/Pareto_improvement">Pareto improvement</a> over Set B - it has everything in Set B and some additional items as well.
Therefore, people should be willing to pay at least as much for A as they are for B.<br /><br />Nonetheless, people are willing to pay almost 50% more for B than for A. The explanation for this "less is better" result is that the "hard" question of finding the absolute value of the set is subconsciously replaced with the "easier" question of finding the relative value of each item in the set.<br /><br />A similar phenomenon occurs in population ethics. Consider two populations:<br /><br /><table><tr><th></th><th>Population A</th><th>Population B</th></tr><tr><td>Investment Bankers:</td><td>100, very well off</td><td>100, very well off</td></tr><tr><td>Secretaries:</td><td>100, moderately well off</td><td></td></tr></table><br />My guess is that Population A would raise more ire than Population B, even though A is a Pareto improvement over B. Suppose we require our population ethics to follow what is sometimes called "Dominance" or "Pareto Dominance":<br /><br /><blockquote>If Population A and Population B differ by only one person, and that person is better off in A than in B, then A is better than B.</blockquote><br />Note that this is a pretty weak condition: in real life, there will almost always be winners and losers to any policy change, so it's rare to be able to decide things based solely on the Pareto Dominance principle.<br /><br />Despite being a weak condition, it rules out population ethics that value equality, diversity etc.<br /><br />Consider an extreme example: we only care about inequality (as measured by say the <a href="http://en.wikipedia.org/wiki/Gini_index">Gini index</a>). In the example above, Population A had more inequality (higher Gini index) and so it would be worse. 
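To make the Gini comparison concrete, here is a small sketch with made-up welfare numbers (10 for each banker, 5 for each secretary; the values are purely illustrative):

```python
def gini(welfares):
    """Gini index: mean absolute difference between all pairs of people,
    normalized by twice the mean welfare."""
    n = len(welfares)
    mean = sum(welfares) / n
    total_diff = sum(abs(x - y) for x in welfares for y in welfares)
    return total_diff / (2 * n * n * mean)

# Hypothetical welfare levels, purely for illustration.
pop_a = [10] * 100 + [5] * 100   # bankers and secretaries
pop_b = [10] * 100               # bankers only

assert gini(pop_a) > gini(pop_b)  # A is less equal than B...
# ...even though A is a Pareto improvement over B.
```

Population B's Gini index is exactly 0 (perfect equality), while Population A's is about 0.17, so a pure-inequality metric ranks the Pareto-better population as worse.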
But A was a Pareto improvement over B, so a contradiction arises; hence, the Gini index can't be the way we compare populations.<br /><br />A more general version of this is true:<br /><br /><blockquote>Suppose $(G,+)$ is a population ethics that obeys the group axioms and Pareto Dominance. Let's say there is also some function $f$ whereby if $pop_a$ and $pop_b$ differ by only one person $\Delta$ then $pop_a > pop_b$ if and only if $\Delta > f(pop_b)$, i.e. $f$ defines the minimum welfare needed for a person to "improve" the total value of the population.<br /><br />Then $f$ is constant. Specifically, $f(x)=0$ for all $x$, where 0 is the identity of $G$.</blockquote><br />In some ways, this is not a very surprising result - it just says that whether your life is good is independent of whether my life is good. But it seems to contradict a lot of things we believe as a society. <br /><br /><b>Proof</b>: Arbitrarily choose some population $pop$ and consider $pop+f(pop)$, i.e. adding a person right on the "margin". There are two possibilities: $pop+f(pop) < pop$ (adding this person is a bad idea), or $pop+f(pop)=pop$ (adding the person doesn't matter). <br /><br />Suppose that $pop+f(pop) < pop$. We know that there is some element $0$ such that $pop+0=pop$. If $0 < f(pop)$ then $pop+f(pop)$ is a Pareto improvement over $pop+0$, so $pop+0 < pop+f(pop) < pop$, which is a contradiction because $pop+0 = pop$. If $0 > f(pop)$ then by the definition of $f$, $pop+0 > pop$, another contradiction. Therefore $0=f(pop)$, proving the theorem in the first case.<br /><br />Alternatively, suppose that $pop+f(pop)=pop$. This means that $f(pop)$ is an identity of $G$, and since identities in a group are unique, $f(pop)$ must be $0$.<br /><br />Since $pop$ was chosen arbitrarily, we have shown this is true for all populations. 
QED.Xodaraphttp://www.blogger.com/profile/00235189388350960670noreply@blogger.com2tag:blogger.com,1999:blog-6172724226008713264.post-39183786584864925002013-05-26T15:06:00.002-07:002013-05-26T15:06:22.393-07:00How to Create a Donor-Advised FundThere are <a href="http://www.givewell.org/charities">a lot</a> of charities. So many, in fact, that some would-be altruists are struck with the alliterative <a href="http://en.wikipedia.org/wiki/Analysis_paralysis">analysis paralysis</a> and end up not donating at all.<br /><br />A tax vehicle known as a "Donor-advised fund" (DAF) allows you to get the best of both worlds: you can donate to charity, with all the psychological and tax benefits that go along with that decision, while still holding off on your decision as to which charity is best.<br /><br />You can create your own DAF in about 15 minutes online. (Be sure to give it an awesome name like "The Jane Doe Fund for <a href="http://wiki.lesswrong.com/wiki/Paperclip_maximizer">Paperclip Maximization</a>" because how many chances do you have to name an organization after yourself?) Once created, you can contribute money when you feel like it and deduct those contributions from your taxes. Your contributions will sit in an account accruing interest until you decide to write a grant to a specific charity.<br /><br />There are <a href="http://80000hours.org/blog/10-delayed-gratification-choosing-when-to-donate">interesting questions</a> about when to put money in a DAF vs. 
donate directly, but if you are uncertain about the most effective charity, <i>especially if you're so uncertain that you might not donate at all</i>, you should create a DAF.<br /><h4>Creating Your DAF</h4><div>Most major investment organizations, such as <a href="http://www.fidelitycharitable.org/">Fidelity</a>, <a href="http://www.schwabcharitable.org/public/charitable/home">Charles Schwab</a> and <a href="http://www.vanguardcharitable.org/">Vanguard</a>, allow you to create a DAF, as do most local community foundations (if you search for "community foundation" in a standard search engine, you should be able to find one near you). Be sure that you're able to invest in a no-load index fund (professional investors <a href="http://www.nerdwallet.com/blog/investing/2013/active-mutual-fund-managers-beat-market-index/">don't do better than chance</a>, so it's not worth paying them a management fee). The major DAF hosts all provide this, so the only real consideration is the management fee they charge. I've found them to be pretty similar, so I just chose Fidelity since my retirement account is already there.</div><div><br /></div><div>Put a few key details into their form and Voilà! You have your own fund!</div><h4>What if I don't have $5,000?</h4><div>The main reason why a DAF might not be right for you is that most providers require a minimum starting donation of $5,000. If you aren't able to put in $5,000 right away, look into other options that community foundations provide. For example, the foundation near me has an <a href="http://www.madisoncommunityfoundation.org/Page.aspx?pid=240">"Acorn fund"</a>, which allows you to donate smaller amounts of money over a longer period of time. </div><h4>Conclusion</h4><div>I consider myself to be more informed than average about the subject of charity effectiveness, but if you look at <a href="http://80000hours.org/members/ben-west">my 80,000 Hours profile</a> you can see I put all my donations into a DAF.
Because even similar-sounding charities can vary in effectiveness by <a href="http://80000hours.org/blog/94-how-to-do-one-year-of-work-in-four-hours">orders of magnitude</a>, it's extremely important to think through your decisions. By using a DAF you can build the "habit" of altruism and take the tax advantages while still ensuring that your money goes to the most effective causes.</div>Xodaraphttp://www.blogger.com/profile/00235189388350960670noreply@blogger.com0tag:blogger.com,1999:blog-6172724226008713264.post-32065265502187815392013-04-27T09:15:00.001-07:002013-04-27T09:15:06.650-07:00Why Classical Utilitarianism is the only (Archimedean) Ethic<script src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript">MathJax.Hub.Config({ tex2jax: {inlineMath: [['$','$'], ['\\(','\\)']]} }); </script> <br /><br />Probably the most famous graph in ethics is this one of Parfit's:<br /><br /><a href="http://2.bp.blogspot.com/-kS-sE4-P9SQ/UXtdyz8PjxI/AAAAAAAAAIw/hIRXufWGeqw/s1600/parfit.png" imageanchor="1" ><img border="0" src="http://2.bp.blogspot.com/-kS-sE4-P9SQ/UXtdyz8PjxI/AAAAAAAAAIw/hIRXufWGeqw/s320/parfit.png"/></a><br /><br />He's constructing a series of worlds where each one has more people, but those people have a lower level of welfare. The question is whether the worlds are equivalent, i.e. whether it's equivalent to have a world with a huge number of barely happy people or a world with a small number of ecstatic individuals.<br /><br />Classical utilitarianism answers "Yes", but some recent attempts to avoid unpleasant results (such as the "repugnant conclusion") have argued "No". For example, <a href="http://www.colorado.edu/philosophy/PHIL4260/Parfit%20-%20%27Overpopulation%20and%20the%20Quality%20of%20Life%27.pdf">Parfit says</a>:<br /><blockquote>Suppose that I can choose between two futures. I could live for another 100 years, all of an extremely high quality. Call this the Century of Ecstasy. 
I could instead live for ever, with a life that would always be barely worth living. Though there would be nothing bad in this life, the only good things would be muzak and potatoes. Call this the Drab Eternity. I believe that, of these two, the Century of Ecstasy would give me a better future.</blockquote><br />The belief that the "Century of Ecstasy" is superior to the "Drab Eternity", no matter how long that eternity lasts, has been called "Non-Archimedean" <a href="http://people.su.se/~guarr/Texter/Superiority%20in%20Value%20PS%202005.pdf">by Arrhenius</a>, in reference to the <a href="http://en.wikipedia.org/wiki/Archimedean_property">Archimedean Property</a> of numbers, which says roughly that there are no "infinitely large" numbers.<sup>1</sup> Specifically, a group is Archimedean if for any $x>0$ and any $y$ there is some natural number $n$ such that $$\underbrace{x+x+\dots+x}_{n\text{ times}}>y$$<br />The following remarkable fact is true:<br /><blockquote>Classical Utilitarianism is the only Archimedean ethic.</blockquote>This means that if we don't accept that the briefest instant of a "higher" pleasure is better than the longest eternity of a "lower" pleasure, then we must be classical utilitarians.<br /><h3>Proof</h3>First, define the terms. As always, we assume that there is some set $X$ which contains various welfare levels. There is an operation $\oplus$ which combines welfare levels; the statement $x\oplus y=z$ can be read as "A life with welfare $x$ and then welfare $y$ is equivalent to having a life with just welfare $z$."<sup>2</sup> It is assumed that this constitutes a group, i.e. the operation is associative and inverses and an identity exist.<br /><br />In order to make decisions, we need some ranking; the statement $x>y$ means "The welfare level $x$ is morally preferable to $y$." We require $>$ to agree with our operation, i.e. 
if $x>y$ then $x\oplus c > y\oplus c$ for all $c$.<br /><br />With the stipulation that our group is Archimedean, this reduces to a theorem of Hölder's, which states that all Archimedean <a href="http://en.wikipedia.org/wiki/Linearly_ordered_group">linearly ordered groups</a> are isomorphic to a subgroup of the reals under addition, i.e. classical utilitarianism. The proof is rather involved, but a fairly readable version can be found <a href="http://unapologetic.wordpress.com/2007/12/17/archimedean-groups-and-the-largest-archimedean-field/">here</a>.∎<br /><h3>Discussion</h3>In order to be useful, non-Archimedean theories can't just say that there is some theoretical amount of welfare which is lexically superior - this level of welfare must exist in our day-to-day lives. Personally, when comparing a brief second of happiness on my happiest day to years of moderate happiness, I would choose the years. This leaves me with no choice but to accept classical utilitarianism.<br /><br /><b>Footnotes</b><br /><ol><li>Ethics with this property have also been called "discontinuous" or having a "lexical" priority.</li><li>Unlike in past blogs where I used $\oplus$ to be a population ethic, here I define it in terms of intra-personal welfare to fit more in line with Parfit's quote.</li></ol>Xodaraphttp://www.blogger.com/profile/00235189388350960670noreply@blogger.com3tag:blogger.com,1999:blog-6172724226008713264.post-55296633867824537592013-04-14T19:42:00.000-07:002013-04-14T19:42:01.677-07:00Group Theory and the Repugnant Conclusion<script src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript">MathJax.Hub.Config({ tex2jax: {inlineMath: [['$','$'], ['\\(','\\)']]} }); </script> <br />A fundamental question in population ethics is the tradeoff between quantity and quality. 
The world has finite resources, so if we promote policies that increase the population, we do so at the risk of decreasing quality of life.<br /><br />Derek Parfit is credited with popularizing the importance of this problem when he pointed out that any population ethic which obeys some seemingly reasonable constraints must end up with what he called "the repugnant conclusion" - the conclusion that a vast world of people whose lives are barely worth living is better than a sparsely-populated world full of happy people. Since Parfit, a range of theories has sought to preserve our intuitions about ethics while still avoiding this conclusion. <br /><br />One discovery of abstract algebra is that we can understand the limitations of systems based solely on the questions they are able to answer, even if we don't know what the answers are.<br /><br />Here, I'll consider any system capable of answering a question like "Are two people who each live 50 years morally equivalent to one person who lives 100 years?" (We don't require that the answer be "Yes" or "No", but merely that there be <i>some</i> answer.) For notational ease, I use the symbol $\oplus$ to denote "moral combination", e.g. the above question can be written $$(50\text{ years})\oplus(50\text{ years})=100\text{ years?}$$ Such a system I will call a "moral group" and require that it obey a few <a href="http://en.wikipedia.org/wiki/Group_(mathematics)#Definition">standard requirements</a>. These are:<br /><br /><ol><li>Any two people can be replaced with one who is (significantly) better off</li><li>There is some level of welfare which is "morally neutral", i.e. 
a person of that welfare neither increases nor decreases the overall moral desirability of the world.</li><li>For any level of welfare, no matter how high, there is some level of welfare which is so negative that the two cancel out</li></ol><br />With this definition, we have an impossibility theorem:<br /><br /><div><b>Theorem</b>: In any "moral group", the repugnant conclusion holds.</div><div><br /></div><div><b>Proof</b>: Suppose that $x$ is a welfare level that is better than "barely worth living". Formally, assume there is some $y$ where $0 < y < x$, i.e. it's possible to be worse off than $x$ and still have a "life worth living". We'll show that a world with just $x$ is morally equivalent to a world with two people who are both worse off than $x$. Repeating this ad infinitum leads to the conclusion that a world with a few happy people is equivalent to a world with a large number of people whose lives are "barely worth living."</div><br /><div>Choose some $y$ between $0$ and $x$ (one exists, by the definition of $x$). Note that $x=y\oplus z$ where $z=y^{-1}\oplus x$, so we just need to show that $z<x$. Since $y>0$, we must have $y^{-1} < 0$: if instead $y^{-1} \geq 0$, then adding $y$ to both sides gives $0 \geq y$, contradicting the assumption that $y>0$. Therefore $y^{-1} \oplus x < x$, or to write it another way: $z < x$. Note also that $z > 0$: since $y < x$, adding $y^{-1}$ to both sides gives $0 < y^{-1}\oplus x = z$. So $x=y\oplus z$, with $y$ and $z$ both positive and both worse than $x$.<br /><br />This means that for any world with people $x_1,x_2,\dots$ of high welfare, there is an equivalent world $y_1,y_2,\dots$ with more people, each of whom has lower welfare. 
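The repeated splitting behind this claim can be written out explicitly (an illustrative expansion of mine, assuming each remainder $z_k$ admits a further split in the same way): applying the lemma first to $x$ and then to each successive remainder gives $$x = y_1 \oplus z_1 = y_1 \oplus y_2 \oplus z_2 = \dots = y_1 \oplus y_2 \oplus \dots \oplus y_n \oplus z_n$$ where each $y_k$ and the final $z_n$ are strictly positive and strictly below $x$; after $n$ splits we have a world of $n+1$ people, every one of them worse off than $x$. 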
By adding some person of low (but still positive) welfare $y_{n+1}$ to the second world, it becomes better than the first, resulting in the repugnant conclusion.∎</div><br />Xodaraphttp://www.blogger.com/profile/00235189388350960670noreply@blogger.com0tag:blogger.com,1999:blog-6172724226008713264.post-85495801431267178562013-04-07T07:54:00.000-07:002013-04-07T07:54:30.773-07:00Algebra and Ethics<script src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript">MathJax.Hub.Config({ tex2jax: {inlineMath: [['$','$'], ['\\(','\\)']]} }); </script> <br /><br />Symmetry is all around us. The kind of symmetry that most people think of is geometric symmetry, e.g. an equilateral triangle has rotational symmetry:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-HYiCR8U16Iw/UVbs2gzSWtI/AAAAAAAAAIE/pOWkw7Y-Wmw/s1600/rotational+symmetry.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="120" src="http://1.bp.blogspot.com/-HYiCR8U16Iw/UVbs2gzSWtI/AAAAAAAAAIE/pOWkw7Y-Wmw/s320/rotational+symmetry.png" width="320" /></a></div><br />I've rotated the triangle by 1/3 of a rotation, but it remains the "same", just with a "relabeling" of the points. Hence this rotation is a symmetry of the triangle.<br /><br />Ethical positions generally express another type of symmetry; when someone argues for "marriage equality" what they mean is that the gender of partners is merely a "relabeling" that keeps the important aspects like love and commitment the same. 
Symmetries in pain processing between humans and other animals have led thinkers like Richard Dawkins to <a href="http://old.richarddawkins.net/articles/641957-but-can-they-suffer">declare</a> that species is merely a relabeling, and that causing pain to a cow is "morally equivalent" to causing pain to a human, calling our eating practices into question.<br /><br />In 1854 Arthur Cayley gave the first modern definition of what mathematicians call a "group", and <a href="http://en.wikipedia.org/wiki/Cayley%27s_theorem">showed</a> that groups are essentially permutations, thus establishing the theory of groups as the language of symmetry. Despite the importance of groups to symmetry and the importance of symmetry to ethics, I'm not able to find any ethical works based on group theory. So I hope to give what may be the first group-theoretic result in ethics.<br /><br /><b>"Group-like" Ethics</b><br />I'm going to be concerned with questions like "is having two people, each of whom lives 50 years, equivalent to having one person who lives 100 years?" I don't require that this question be answered either "yes" or "no", but only that the question has <i>some</i> answer.<br /><br />So that this post doesn't take up a huge amount of space, I'm going to define the symbol $\oplus$ to mean "moral combination" and $=$ to mean moral equivalence, so the statement "two people, each of whom lives fifty years, is equivalent to one person living 100 years" can be written as $$(50 \text{ years})\oplus(50 \text{ years})=100 \text{ years}$$ There are many different ways to define $\oplus$. For example, we might care <a href="http://en.wikipedia.org/wiki/John_Rawls">only about</a> the worst-off person - in this case $(50 \text{ years})\oplus(50 \text{ years})=50 \text{ years}$ as the worst-off person on the left-hand side of the equation has the same length of life as the worst-off person on the right. 
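As a toy sketch (the function names and welfare numbers are mine, not from any of the theories cited), the classical and Rawlsian choices of $\oplus$ can be compared directly:

```python
# Hypothetical sketch: two candidate "moral combination" operators.

def classical(x, y):
    """Classical utilitarianism: welfare levels simply add."""
    return x + y

def rawlsian(x, y):
    """Care only about the worst-off person (a Rawls-style maximin)."""
    return min(x, y)

# Two 50-year lives vs. one 100-year life:
print(classical(50, 50))  # 100 -- equivalent to one 100-year life
print(rawlsian(50, 50))   # 50  -- equivalent to one 50-year life
```

Under the Rawlsian operator, $(50 \text{ years})\oplus(50 \text{ years})=50 \text{ years}$, exactly as in the example above.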
Alternatively, we might point out that quality of life degrades as you get older, so in fact maybe $(50 \text{ years})\oplus(50 \text{ years})=150 \text{ years}$ since the two young people get so much more joy out of their life. The World Health Organization <a href="http://www.who.int/quantifying_ehimpacts/publications/en/9241546204chap3.pdf">follows this</a> model and weights lives like this:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-pcUlOADhHwk/UVdNKBffsGI/AAAAAAAAAIU/kh8P-4YojV4/s1600/who+weights.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="127" src="http://1.bp.blogspot.com/-pcUlOADhHwk/UVdNKBffsGI/AAAAAAAAAIU/kh8P-4YojV4/s320/who+weights.png" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: left;">According to their formula, old age is so awful that $(40 \text{ years})\oplus(40 \text{ years})=125 \text{ years}$ and one person would have to live for thousands of years to be equivalent to two 50 year lifespans.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">In addition to requiring that statements like $(50 \text{ years})\oplus(50 \text{ years})$ have some answer, I will also require that there is an "identity", i.e. there is some quality of life such that adding a person with that quality of life doesn't change the overall value of the world. This is a reasonable assumption because:</div><div class="separator" style="clear: both; text-align: left;"></div><ol><li>Sometimes increasing the population is a good idea, i.e. there is some $y$ such that $x\oplus y > x$</li><li>Sometimes increasing the population is a bad idea, i.e. 
there is some $z$ such that $x\oplus z < x$</li><li>By the intermediate value theorem, there must therefore be some value which I'll call $0$ such that $x\oplus 0 = x$</li></ol><br /><div class="separator" style="clear: both; text-align: left;">Any ethical system which has an operation like $\oplus$ I will call "group-like" (although observant readers will note that I'm making fewer assumptions than what groups require - technically this is a "unital magma").</div><br /><b>"Utilitarian-like" Ethics</b><br />The classic definition of "utilitarianism" is to look only at happiness and to define $\oplus=+$, e.g. two people with five "units" of happiness is equivalent to one person with ten units of happiness.<br /><br />There are a plethora of "utilitarian-like" ethical theories which define $\oplus$ as being sort of like addition, but not really. For example, <a href="http://en.wikipedia.org/wiki/Utilitarianism#Negative_utilitarianism">negative utilitarians</a> would first discard any pleasure, and look only at the pain of each individual before doing the addition. <a href="http://en.wikipedia.org/wiki/Prioritarianism">Prioritarians </a>wouldn't completely disregard pleasure, but they would weight helping those in need more strongly. The <a href="http://en.wikipedia.org/wiki/List_of_countries_by_Sen_social_welfare_function">Sen social welfare function</a> weights income by inequality before doing the addition. And so on.<br /><br />I will describe an ethical system as "utilitarian-like" if it is equivalent to doing addition with some appropriate transformation applied first. 
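In code, "transform, then add" can be written as a higher-order function (a sketch of mine; the names and the "suffering counts double" transformation are illustrative, chosen to mirror the negative-utilitarian weighting discussed below):

```python
# Hypothetical sketch of a "transform, then add" moral combination.

def utilitarian_like(f):
    """Build a combination operator from a per-person transformation f."""
    return lambda x, y: f(x) + f(y)

# Classical utilitarianism: the transformation is the identity.
classical = utilitarian_like(lambda x: x)

# A negative-utilitarian-style variant: suffering (x < 0) counts double.
negative = utilitarian_like(lambda x: 2 * x if x < 0 else x)

print(classical(5, 5))   # 10
print(negative(-1, 2))   # 0, since f(-1) + f(2) = -2 + 2
```

Swapping in a different transformation `f` gives the prioritarian, Sen-style, and other variants.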
Formally, utilitarian-like operations are of the form $x\oplus y = f(x)+f(y)$.<br /><br /><b>The Theorem</b><br />With these definitions in mind, we can state our theorem:<br /><blockquote class="tr_bq">The only ethical system which is both group-like and utilitarian-like is classical ("Benthamite") utilitarianism.</blockquote>Observant readers will notice that my examples in the "group-like" section were different from the examples in the "utilitarian-like" section. This theorem proves that this is not an accident.<br /><br /><i>Proof:</i> $x\oplus 0 = f(x)+f(0)$, so $x = f(x)+f(0)$, or to rewrite it another way, $f(x)=x - f(0)$ where $f(0)$ is some constant. This means that all group-like and utilitarian-like operations are equivalent to addition, just shifted by a constant. To make "equivalent" formal: the map $\phi(x) = x - 2f(0)$ is easily checked to be an isomorphism $(\mathbb{R},\oplus)\to(\mathbb{R},+)$.<span style="background-color: white; font-family: sans-serif; font-size: large; line-height: 19.1875px;">∎</span><br /><br /><b>Discussion</b><br />The reason why Prioritarians et al. fail to be group-like is something I haven't seen discussed much in the literature: a lack of an identity element.<br /><br />For example, suppose $x\oplus y = f(x)+f(y)$ where $$f(x) = \left\{\begin{array}{lr}2x & x < 0\\x & x \geq 0\end{array}\right.$$ This is a negative utilitarian-type ethics which weights suffering (i.e. negative experience) more strongly.<br /><br />Consider a few possible worlds in which we add someone of utility 2:<br /><br /><ol><li>$-1\oplus 2 = 0$</li><li>$-2\oplus 2 = -2$</li><li>$-3\oplus 2 = -4$</li></ol><br />In the first case, adding someone of utility two improves the world. 
In the second, it keeps the world the same and in the third it makes the world worse.<br /><br />That negative utilitarianism requires this isn't immediately obvious to me, and I believe it to be a non-trivial result of using group theory.<br /><br /><b>Conclusion</b><br />We might view negative utilitarianism or prioritarianism as a form of "pre-processing". For example, we might say that painful experiences affect utility more than positive ones. But when it comes to comparing utility to utility, it must be "each to count for one and none for more than one" with all the <a href="http://en.wikipedia.org/wiki/Utilitarianism#Criticisms">counter-intuitive results</a> that implies.Xodaraphttp://www.blogger.com/profile/00235189388350960670noreply@blogger.com9tag:blogger.com,1999:blog-6172724226008713264.post-14215368721355045802013-02-02T13:24:00.001-08:002013-02-02T13:24:47.439-08:00Poverty and Plant-Based DietsForty years ago, Frances Moore Lappe wrote <a href="http://en.wikipedia.org/wiki/Diet_for_a_small_planet">Diet for a Small Planet</a>, a combination cookbook and food industry critique. In it, she pointed out that the grain we feed to livestock animals could instead be fed to hungry people.<br /><br />The <a href="http://en.wikipedia.org/wiki/2010%E2%80%932011_global_food_crisis">recent shock</a> in food prices has led to increased examination of food cost determinants, and the data provides interesting insights into how our diets can affect the lives of the world's poor.<br /><h3>The Numbers</h3>According to <a href="http://www.countinganimals.com/how-many-animals-does-a-vegetarian-save/">Counting Animals</a>, a vegetarian saves 29 chickens, 1/2 of a pig and an eighth of a cow each year. Using the formula developed by <a href="http://www.aae.wisc.edu/renk/library/Effect%20of%20Ethanol%20on%20Corn%20Price.pdf">Fortenbery and Park</a>, 9 million such vegetarians would reduce the price of corn by $5/bushel. 
Using this as a proxy for soy, food prices of the ten <a href="http://en.wikipedia.org/wiki/Staple_food">staple foods</a> would drop by 20%<sup>1</sup>. This corresponds<sup>2</sup> to the central scenario of <a href="http://www-wds.worldbank.org/external/default/WDSContentServer/IW3P/IB/2008/07/14/000158349_20080714104851/Rendered/PDF/WPS4666.pdf">Dessus et al.</a>, which is estimated to bring 233.2 million people out of absolute poverty (defined as living on less than $2/day). Using <a href="http://www.jpands.org/vol16no1/goklany.pdf">Goklany</a>'s estimates, this would avert 1.22 million deaths and 42.7 million <a href="http://www.givewell.org/international/technical/additional/DALY">disability-adjusted life-years</a>.<sup>3</sup><br /><br />To put it in personal terms: one vegetarian saves one human life for every eight years they're veg, and averts four DALYs per year of vegetarianism.<br /><h3>Cost Effectiveness</h3><div>EAA has <a href="http://www.effectiveanimalactivism.org/top-animal-charities-and-climate-change">previously estimated</a> that the <a href="http://www.effectiveanimalactivism.org/Top-charities">top charities</a> create one vegetarian-year for around $11. This means that top veg charities save one person for $90, and spend around $2.75 to avert a DALY. For comparison, the Against Malaria Foundation, GiveWell's current top pick, spends <a href="http://www.givewell.org/international/top-charities/AMF#Costperlifesaved">$2,300 per life saved</a> or between <a href="https://sheet.zoho.com/open.do?docid=426675000000015003">$29 and $169/DALY</a>.<br /><br />Even with the generous padding that these rough calculations deserve, veg charities may be competitive with other poverty-focused charities.<br /><h4>Footnotes</h4></div><div>Code used to calculate these numbers can be found <a href="https://gist.github.com/4699242">here</a>.</div><div><ol><li>This would cause a drop in soy and corn prices of 63%. 
However, these foods make up only a third of total global staples, meaning that aggregate staple price would drop by only ~20% (ceteris paribus). Note that Fortenbery and Park's model probably wouldn't handle such a large change well, so this should be considered a very rough estimate.</li><li>Dessus and Goklany both examined the other direction: how many more people would enter poverty as the result of increased food prices. I assume here that the change is symmetric, i.e. the badness caused by an increase of $x is the same as the goodness caused by a decrease of $x.</li><li>Goklany separates DALYs meaning "disability with no death" from actual deaths, in contrast to places like GiveWell, which usually include premature death in their DALY calculation.</li></ol></div>Xodaraphttp://www.blogger.com/profile/00235189388350960670noreply@blogger.com8tag:blogger.com,1999:blog-6172724226008713264.post-58784023074209070962012-12-31T17:12:00.000-08:002013-01-01T09:19:14.488-08:00Should Veg Advocates Use Health Arguments?There are occasionally debates about how we can best advocate for veganism. Usually these debates take place between ethical vegans, so (unsurprisingly) the conclusion tends to be that ethical arguments are the best approach.<br /><br />For example, the <a href="http://www.amazon.com/The-Animal-Activists-Handbook-Maximizing/dp/1590561201/ref=sr_tc_2_1?ie=UTF8&qid=1357000129&sr=1-2-ent">Animal Activist's Handbook</a> calls health-based arguments "problematic" and urges readers to focus on ethics-based approaches. 
No less an authority than Mahatma Gandhi said in his book <a href="http://www.amazon.com/The-Moral-Basis-of-Vegetarianism/dp/B000KOH8SS/ref=sr_1_1?s=books&ie=UTF8&qid=1357000187&sr=1-1&keywords=the+moral+basis+of+vegetarianism">The Moral Basis of Vegetarianism</a>:<br /><blockquote>I notice also that it is those persons who become vegetarian because they are suffering from some disease or other - that is, from the purely health point of view - it is those persons who largely fall back. I discovered that for remaining staunch to vegetarianism a man requires a moral basis.</blockquote>Like a lot of marketing advice, these theories are usually justified by an appeal to intuition, and like most such appeals I suspect that they aren't well supported by the facts.<br /><br /><a href="http://agecon2.tamu.edu/people/faculty/capps-oral/agec%20635/Readings/Effects%20of%20Health%20Information%20and%20Generic%20Advertising%20on%20U.S.%20Meat%20Demand.pdf">A review</a> of US meat consumption found that health information (as measured by the number of articles published in medical journals about the bad effects of cholesterol) had a stronger effect on demand than even price changes. <a href="http://ageconsearch.umn.edu/bitstream/27141/1/35010143.pdf">A similar review</a> of Canadian meat consumption found that government recommendations to eat less meat appear to have a significant impact. Concerns about cholesterol have sent the demand for <a href="http://www.jstor.org/discover/10.2307/1242447?uid=3739256&uid=2&uid=4&sid=21101526019211">butter</a> and <a href="http://www.jstor.org/discover/10.2307/1243023?uid=3739256&uid=2&uid=4&sid=21101526019211">eggs</a> plummeting. 
As <a href="http://www.prwatch.org/prwissues/1997Q2/lyman.html">Oprah fans know</a>, information about the unhealthfulness of beef <a href="http://ageconsearch.umn.edu/bitstream/21648/1/sp99fl02.pdf">causes a huge drop</a> in beef consumption - without increasing the consumption of pigs or chickens.<br /><br />In a survey by the Vegetarian Journal, 82% of readers stated that they became vegetarian for health reasons, and among adolescents a vegetarian diet <a href="http://ethik.univie.ac.at/fileadmin/user_upload/inst_ethik_wiss_dialog/Perry__C_2001_Veg_Adolesc..pdf">seems to be linked</a> with a desire for weight control. This is confirmed by the Vegetarian Times' survey, which found that <a href="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1022507/pdf/westjmed00069-0085.pdf">the majority of</a> self-described vegetarians do it for health reasons. In a psychological survey of the origins of vegetarianism, the authors <a href="http://ethik.univie.ac.at/fileadmin/user_upload/inst_ethik_wiss_dialog/Rozin__.._1997._Moralization_and_becoming_a_vegetarian_in__psychological_science.pdf">found that</a> slightly less than half of vegetarians originally quit eating meat for health reasons. Vegetarians of all stripes are <a href="http://smas.chemeng.ntua.gr/miram/files../publ_130_10_2_2005.pdf">significantly more likely</a> to be concerned about health aspects of their food.<br /><br />And we shouldn't think that someone who becomes veg*n for health reasons will be less committed. <a href="http://www.humanespot.org/content/model-process-adopting-vegetarian-diets-health-vegetarians-and-ethical-vegetarians">An attempt</a> to understand the process of becoming vegetarian found that slightly more than half the subjects were vegetarian for ethical reasons, but "health vegetarians became increasingly aware of animal welfare issues and this reaffirmed the transition." 
Indeed, the initial ethical/health distinction <a href="http://eprints.whiterose.ac.uk/3741/1/foxn1.pdf">seems to fade</a> over time as ethical vegetarians become more interested in health, and vice versa.<br /><br /><blockquote><i>Unspeakably more depends on what things are called, than on what they are.</i> - Friedrich Nietzsche</blockquote><br />It's critically important to consider here too the benefit gained from advocacy that is not "vegan advocacy." You probably have heard of <a href="http://en.wikipedia.org/wiki/Pink_slime">pink slime</a>, a filler used in ground beef. The public outcry sent <a href="http://www.reuters.com/article/2012/04/04/livestock-markets-cme-idUSL2E8F424I20120404">beef prices plummeting</a>, causing at least one producer to <a href="http://abcnews.go.com/blogs/headlines/2012/04/pink-slime-maker-afa-files-for-bankruptcy/">declare bankruptcy</a>. Several lawsuits regarding <i>E. Coli</i>-infected beef caused Topp's Meat Company to <a href="http://www.marlerblog.com/legal-cases/topps-files-for-bankruptcy-after-massive-beef-recall/">file for Chapter 11</a> a few years ago. The Hallmark/Westland Meat Packing Company <a href="http://en.wikipedia.org/wiki/Hallmark/Westland_Meat_Packing_Company">went bankrupt</a> after an investigation by the Humane Society of the US caused the largest beef recall in history - not because of animal cruelty violations (which were horrendous), but because of health concerns.<br /><br />Bruce Schneier <a href="http://www.schneier.com/blog/archives/2007/05/rare_risk_and_o_1.html">has said</a> that no one should be concerned by what's on the news - if it's newsworthy, it's by definition unusual, hence it almost certainly won't affect you. This is a fact which a lot of advocates seem to forget. 
Pink slime is <a href="http://grist.org/factory-farms/pink-slime-is-the-tip-of-the-iceberg-look-what-else-is-in-industrial-meat/">probably no worse</a> than any other type of meat, yet some combination of branding, luck and timing caused tremendous economic damage to the beef industry. Similarly, your chance of dying from E. Coli <a href="http://en.wikipedia.org/wiki/2011_E._coli_O104:H4_outbreak">even during an "outbreak"</a> compares favorably with that of being struck by lightning, yet we find massively expensive recalls happening on an <a href="http://www.food-poisoning-blog.com/">almost weekly basis</a>.<br /><br />So we have to be extremely careful when evaluating things like <a href="http://www.theveganrd.com/2012/01/should-you-go-vegan-to-get-skinny.html">the evidence</a> that vegan diets help with long-term weight loss. They stack up <a href="http://health.usnews.com/best-diet/best-weight-loss-diets">pretty well</a> when compared to the competition, but the fact that they aren't overwhelmingly better than anything else doesn't necessarily mean that the health argument <a href="http://www.theveganrd.com/2010/11/how-the-health-argument-fails-veganism.html">fails veganism</a>.<br /><br />Maybe health benefits aren't the best way to present veganism. Certainly there is a subgroup of people that is more responsive to ethical arguments than health ones, and we have to be careful about change which moves people from one type of animal consumption to another (although the evidence seems to indicate that this is less of a problem than one might think). 
But I hope I've convinced you that this is not something which can be decided by navel-gazing - it needs to be decided empirically, by doing surveys, handing out pamphlets and measuring what works.<br /><br />If you are interested in learning more about health-based arguments for veganism, <a href="http://pcrm.org/">PCRM</a> is a good place to start.Xodaraphttp://www.blogger.com/profile/00235189388350960670noreply@blogger.com1tag:blogger.com,1999:blog-6172724226008713264.post-84767702187499742142012-11-25T13:28:00.001-08:002012-11-27T06:06:33.479-08:00Don't "Raise Awareness"<span style="background-color: white; color: #222222; font-family: arial, sans-serif; font-size: 13px;">In </span><a href="http://www.envplan.com/abstract.cgi?id=a301445" style="background-color: white; color: #1155cc; font-family: arial, sans-serif; font-size: 13px;" target="_blank">an early analysis</a><span style="background-color: white; color: #222222; font-family: arial, sans-serif; font-size: 13px;"> of efforts to convince people to act more pro-environmentally, Burgess et al. present the following flowchart on how people's minds change:</span><br /><div style="background-color: white; color: #222222; font-family: arial, sans-serif; font-size: 13px;"><br /></div><div style="background-color: white; color: #222222; font-family: arial, sans-serif; font-size: 13px; text-align: center;"><img alt="http://2.bp.blogspot.com/--HiFtWz9Qsk/ULJz_c4jGoI/AAAAAAAAAHw/qx7nSYqYHIw/s320/flowchart.png" src="http://2.bp.blogspot.com/--HiFtWz9Qsk/ULJz_c4jGoI/AAAAAAAAAHw/qx7nSYqYHIw/s320/flowchart.png" /></div><div style="background-color: white; color: #222222; font-family: arial, sans-serif; font-size: 13px;"><br /></div><div style="background-color: white; color: #222222; font-family: arial, sans-serif; font-size: 13px;">It seems pretty straightforward. 
It's also completely wrong.</div><div style="background-color: white; color: #222222; font-family: arial, sans-serif; font-size: 13px;"><br /></div><div style="background-color: white; color: #222222; font-family: arial, sans-serif; font-size: 13px;">In a later <a href="http://www.ecocreditz.com.au/downloads/379819/Mind+Gap+Kollmuis+and+Agyeman.pdf" style="color: #1155cc;">metasurvey</a>, Kollmuss and Agyeman say:</div><blockquote style="background-color: white; color: #222222; font-family: arial, sans-serif; font-size: 13px;">These models from the early 1970s were soon proven to be wrong. Research showed that in most cases, increases in knowledge and awareness did not lead to pro-environmental behavior. Yet today, most environmental Non-governmental Organisations (NGOs) still base their communication campaigns and strategies on the simplistic assumption that more knowledge will lead to more enlightened behavior. </blockquote><div style="background-color: white; color: #222222; font-family: arial, sans-serif; font-size: 13px;">Problems go even further. Kollmuss and Agyeman add that "quantitative research has shown that there is a discrepancy between attitude and behavior." 
<a href="http://www.acrwebsite.org/search/view-conference-proceedings.aspx?Id=6419" style="color: #1155cc;" target="_blank">Wong and Sheth</a> agree, saying that the relationship between beliefs and behavior is generally found to be "low and nonsignificant."</div><div style="background-color: white; color: #222222; font-family: arial, sans-serif; font-size: 13px;"><br /></div><div style="background-color: white; color: #222222; font-family: arial, sans-serif; font-size: 13px;"><a href="http://www.gallup.com/poll/8461/public-lukewarm-animal-rights.aspx" style="color: #1155cc;" target="_blank">25% of Americans</a> tell pollsters that "Animals deserve the same rights as people," yet only <a href="http://www.gallup.com/poll/156215/consider-themselves-vegetarians.aspx" style="color: #1155cc;" target="_blank">2% are vegan</a>. Unless a quarter of Americans believe it's ok to torture humans to death for their flesh, that's a pretty big gap between beliefs and behavior.</div><div style="background-color: white; color: #222222; font-family: arial, sans-serif; font-size: 13px;"><br /></div><div style="background-color: white; color: #222222; font-family: arial, sans-serif; font-size: 13px;"><a href="http://en.wikipedia.org/wiki/Henry_Spira" style="color: #1155cc;" target="_blank">Henry Spira</a>, one of the most effective animal advocates of all time, noted this problem in his <a href="http://www.theveganrd.com/2009/10/ten-tips-for-animal-activists-based-on-the-life-of-henry-spira.html" style="color: #1155cc;" target="_blank">list of tips for advocates</a> when he disparaged "raising awareness". 
It's very easy to convince ourselves that we're building "mindshare" even if people's behaviors don't change, but without the explicit measurement of the sort that EAA's <a href="http://www.effectiveanimalactivism.org/Top-charities" style="color: #1155cc;" target="_blank">Top Charities</a> do, we're probably just building castles in the air.</div>Xodaraphttp://www.blogger.com/profile/00235189388350960670noreply@blogger.com8tag:blogger.com,1999:blog-6172724226008713264.post-13719924017550645032012-11-03T11:21:00.000-07:002012-11-04T07:18:45.811-08:00Kill the Young People<script src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript">MathJax.Hub.Config({ tex2jax: {inlineMath: [['$','$'], ['\\(','\\)']]} }); </script> <br /><br />Suppose you were forced to choose between killing someone today, and killing someone a century from now. Which would you choose?<br /><br />To be clear: the two people are exactly the same - equally happy, healthy, etc. And the effects on others are the same, and there is no uncertainty involved. The only difference between the two murders is when they occur.<br /><br />It seems hard to give a justification for why one is better than the other, and in the landmark <a href="http://www.amazon.com/The-Economics-Climate-Change-Review/dp/0521700809">Stern Review on Climate Change</a> the eponymous Nicholas Stern said as much. 
In his <a href="http://www.publications.parliament.uk/pa/cm200708/cmselect/cmtreasy/231/23105.htm#a11">interrogation by Parliament</a>, he stated that to choose one over the other is to "discriminate between people by date of birth," a position that is "extremely hard to defend".<br /><br />There is a <a href="http://en.wikipedia.org/wiki/Stern_Review#Discounting">lot of controversy</a> about whether he made the right decision, mostly motivated by the fact that even slightly different decisions on how we "discount" the future can cause huge differences in how we respond to threats which will kill people in the future, like climate change.<br /><br />A remarkable proof by Peter Diamond shows that, under some reasonable assumptions, we should indeed "discriminate by date of birth," and choose to kill the person a century from now.<br /><br /><h2>Diamond's Proof</h2><br />The full proof (and several others) can be found in his paper <a href="http://folk.uio.no/gasheim/zDia1965.pdf">The Evaluation of Infinite Utility Streams</a>, but I'll present a simplified version here.<br /><br />First, some notation. We denote welfare over time as a list, e.g. $(1,2,3)$ indicates that at time 1 all sentient persons have utility 1, and time 2 they have utility 2 and so forth. Because time is infinite, these lists are infinitely long. We denote infinite repetition with "rep", e.g. $1_{rep}$ is the list $(1,1,1,\dots)$. These lists are given variable names - I use $u,v$ for finite lists and $X,Y$ for infinite lists - and they are compared with the standard inequality symbols ($>,\geq$).<br /><br />There are four assumptions:<br /><br /><ol><li>If $u\geq v$ then $u_{rep} \geq v_{rep}$. I.e. if some finite list of utilities $u$ is better than some other finite list $v$, then repeating $u$ for all of eternity is better than repeating $v$ for all of eternity.</li><li>If $u\geq v$ then $(u,X)\geq (v,X)$. I.e. 
if $u$ is better than $v$, starting off the world with $u$ is better than starting things off with $v$, given that the rest of time is equal.</li><li>If $X\geq Y$ then $(u,X)\geq (u,Y)$. I.e. if some infinite state of affairs $X$ is better than $Y$, starting them both off with $u$ won't change that.</li><li>If $X$ is the same as $Y$ except that some people are better off (and no one is worse off), then $X > Y$. (This is sometimes known as <a href="http://en.wikipedia.org/wiki/Pareto_efficiency">Pareto efficiency</a>.)<br /></li></ol><b>Proof:</b> The proof actually isn't that complicated, but it looks intimidating because the notation is probably unfamiliar.<br /><br />Suppose, for the sake of contradiction, that $(1,2)_{rep}\geq (2,1)_{rep}$. By (A4), $(2,2,(1,2)_{rep}) > (1,2)_{rep}$, since the two streams are identical except that people are better off at $t = 1$; combining this with our assumption gives $(2,2,(1,2)_{rep}) > (2,1)_{rep}$. By rearrangement, $(2,2,(1,2)_{rep})=(2,(2,1)_{rep})$ and $(2,1)_{rep}=(2,(1,2)_{rep})$, so $(2,(2,1)_{rep}) > (2,(1,2)_{rep})$. But we had assumed that $(1,2)_{rep}\geq (2,1)_{rep}$, which by (A3) means that $(2,(1,2)_{rep}) \geq (2,(2,1)_{rep})$. We've reached a contradiction, so our assumption must be false: $(1,2)_{rep} < (2,1)_{rep}$.<br /><br />This means that $(1,2) < (2,1)$: if instead we had $(1,2) \geq (2,1)$, then $(1,2)_{rep} \geq (2,1)_{rep}$ by (A1), which we just ruled out. Therefore, by (A2), $(1,2,X) < (2,1,X)$, meaning that if we could shift happiness from year two to year one, we should.<br /><br />We should value the happiness of those born earlier more than those born later, and kill the person living a century from now.<br /><h2>Discussion</h2><br />My girlfriend pointed out to me that the reason why we interpret this theorem to mean that people born earlier matter more is because we assume time has a beginning but no end.
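One family of orderings that actually delivers the theorem's verdict is exponential discounting, where utility at time $t$ is weighted by $\delta^t$ for some $\delta < 1$. A quick numerical illustration (my own sketch, not something from Diamond's paper) that any such ordering prefers $(2,1)_{rep}$ to $(1,2)_{rep}$:

```python
def discounted(pair, delta, terms=1000):
    """Approximate the discounted sum of the infinite stream (a, b, a, b, ...)."""
    a, b = pair
    return sum(delta**t * (a if t % 2 == 0 else b) for t in range(terms))

# For every discount factor delta < 1, (2,1,2,1,...) beats (1,2,1,2,...),
# i.e. the ordering values earlier happiness more:
for delta in (0.5, 0.9, 0.99):
    assert discounted((2, 1), delta) > discounted((1, 2), delta)
```

In closed form the two sums are $(2+\delta)/(1-\delta^2)$ and $(1+2\delta)/(1-\delta^2)$, so the first is larger exactly when $\delta < 1$.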
If we assumed the opposite, then people born later would matter more.Xodaraphttp://www.blogger.com/profile/00235189388350960670noreply@blogger.com4tag:blogger.com,1999:blog-6172724226008713264.post-56468239449311712992012-07-07T09:30:00.000-07:002012-07-07T09:30:57.336-07:00Should you walk or run when it's hot out?<script src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript">MathJax.Hub.Config({ tex2jax: {inlineMath: [['$','$'], ['\\(','\\)']]} }); </script> <br /><br />Thanks to global warming, I've gotten the pleasure of walking to work in excruciatingly hot conditions recently. As I was walking to my office yesterday, I started thinking "it's so hot - I should hurry inside to get to the air conditioning." But of course, if I hurried then I would burn more energy and heat myself up even further!<br /><br />Despite the <a href="http://www.nytimes.com/2006/10/24/health/24real.html/partner/rssnyt/">wide body of literature</a> available on whether one should walk or run in the rain, I couldn't find any suggestions on whether to walk or run in the heat. So I decided to do some investigating.<br /><br />For a 150-pound person, walking at three miles per hour (a moderate pace) burns 230 Calories per hour. Walking at four miles per hour (a brisk walk) burns 350 Calories per hour, and five miles per hour (a near run / slow jog) burns 544.<br /><br />In 30<span style="background-color: white; font-family: sans-serif; font-size: 13px; line-height: 19px;">°</span><span style="background-color: white;">C temperature (~85</span><span style="background-color: white; font-family: sans-serif; font-size: 13px; line-height: 19px;">°F</span><span style="background-color: white;">)</span><span style="background-color: white;">, sunlight delivers about 400 Watts of solar radiation per square meter, about half of which is immediately reflected (depending on your skin color and clothing). 
Most people's <a href="http://en.wikipedia.org/wiki/Body_surface_area">body surface area</a> is around 1.8m<sup>2</sup>, but only about half of your body will directly receive the sunlight, so we'll assume that you're getting ~200 Watts of power from the sun, which equates to 172 Calories per hour.</span><br /><span style="background-color: white;"><br /></span><br /><span style="background-color: white;">Lastly, we need to consider convection (or "wind chill"). The "wind chill factor" can be approximated as $8.3\sqrt{w}$, with $w$ the wind speed in meters per second. Skin temperature stays fairly constant around 34</span><span style="background-color: white; font-family: sans-serif; font-size: 13px; line-height: 19px;">°</span><span style="background-color: white;">C, meaning that in our </span><span style="background-color: white;">30</span><span style="background-color: white; font-family: sans-serif; font-size: 13px; line-height: 19px;">°</span><span style="background-color: white;">C weather there will be a temperature differential of 4 degrees. The rate of heat loss is the product of the temperature difference, wind chill factor and body surface area. For a standard person, this gives us (in Watts) $$\frac{dQ}{dt}=8.3\times \sqrt{w}\times 4 \times 1.8 = 59.8\sqrt{w}$$ which we can multiply by the time spent outside to find our total heat loss.</span><br /><span style="background-color: white;"><br /></span><br /><span style="background-color: white;">If you walk at three miles an hour, you will gain 230 C/hr from the exercise, and spend 1/6 of an hour outside to walk 1/2 of a mile. While outside, you will gain 172 C/hr from the sun, but lose 60 C/hr from convection, for a net environmental gain of 112 C/hr. 
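Since the heat rates are constant in time, the whole calculation reduces to a few multiplications. Here is a small Python version of the arithmetic (my own sketch; the constants are the ones quoted in this post, and I'm treating your walking speed as the wind speed, which is what the 60 C/hr convection figure assumes):

```python
import math

def net_heat_calories(speed_mph, exercise_cal_per_hr, distance_miles=0.5,
                      sun_cal_per_hr=172, skin_minus_air_c=4, area_m2=1.8):
    """Net heat (in Calories) gained over the walk, using the figures above."""
    hours = distance_miles / speed_mph
    wind_mps = speed_mph * 0.44704  # your own walking speed acts as the wind speed
    # Convective loss: wind chill factor * temperature difference * area, in Watts
    loss_watts = 8.3 * math.sqrt(wind_mps) * skin_minus_air_c * area_m2
    loss_cal_per_hr = loss_watts * 3600 / 4184  # Watts -> Calories per hour
    return (exercise_cal_per_hr + sun_cal_per_hr - loss_cal_per_hr) * hours

# Metabolic rates for a 150-pound walker, from the text:
for mph, burn in [(3, 230), (4, 350), (5, 544)]:
    print(f"{mph} mph: {net_heat_calories(mph, burn):.0f} Calories gained")
```

Up to rounding, this reproduces the per-speed totals discussed next; changing the defaults lets you play with the distance or temperature differential.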
If we multiply this all out, we find that you will gain a total of 57 Calories of heat over the trip.</span><br /><span style="background-color: white;"><br /></span><br /><span style="background-color: white;">Repeating the calculation shows a gain of 56 Calories at four miles per hour, and 64 Calories at five.</span><br /><span style="background-color: white;"><br /></span><br /><span style="background-color: white;">So my recommendation is what you would probably expect: a brisk walk adds slightly less heat than a moderate one, but don't jog or you'll just end up soaked in sweat.</span><br /><span style="background-color: white;"><br /></span><br /><span style="background-color: white;">(All numbers and formulae taken from <a href="http://www.springer.com/physics/biophysics+%26+biological+physics/book/978-3-540-29603-4">Physics of the Human Body</a>. See <a href="https://gist.github.com/3067074">this gist</a> for a small octave/matlab script if you want to play around with the numbers. It would be interesting to consider the effects of humidity and higher temperatures, if you're up for it.)</span>Xodaraphttp://www.blogger.com/profile/00235189388350960670noreply@blogger.com1tag:blogger.com,1999:blog-6172724226008713264.post-70330394343953934772012-06-17T15:44:00.000-07:002012-06-17T15:44:54.882-07:00Animal Welfare is Animal AbolitionGary Francione is known for promoting what he calls the "<a href="http://www.abolitionistapproach.com/">abolitionist approach</a>" to animal rights. Essentially, this means he is against improving the welfare of farm animals via legislation, viewing this as ineffective or even counter-productive. Instead, he argues that we should be unequivocal that veganism is the "moral baseline" of mankind. <br /><br />The name comes from slavery: "abolitionists" were those who argued that any form of slavery was immoral, and must be eradicated immediately. 
This is opposed to general "anti-slavery activists" who wanted a gradual phase-out of some, or perhaps all, slavery.<br /><br />An under-appreciated point is that very few anti-slavery activists were actually abolitionists, and the term was more often used to slander the Republican party. Just as Obama increased government funding for health care and was labeled a "socialist," Abraham Lincoln ran on a platform of slightly improving the well-being of slaves and was labeled an "abolitionist." (He supported, for example, the <a href="https://en.wikipedia.org/wiki/Corwin_Amendment">Corwin Amendment</a>, which would have amended the Constitution to protect slavery in states where it already existed - hardly an abolitionist stance.)<br /><br />Even when Lincoln had the war-time power to issue "Executive Orders" (i.e. laws which didn't have to pass Congress), his famous <a href="https://en.wikipedia.org/wiki/Emancipation_Proclamation">Emancipation Proclamation</a> did not, despite what your 8<sup>th</sup> grade history textbook says, make slavery illegal. It only freed about 75% of current slaves, and said nothing about the practice of slavery itself. (Slavery was later made illegal by the 13<sup>th</sup> amendment.)<br /><br />The moral of this history lesson is that change is slow, and often requires making deals with the devil.<br /><br />But I digress. The point I wanted to make is that there really is no way to work towards improving animal welfare without working towards abolition. This comes from what economists call the <a href="https://en.wikipedia.org/wiki/Law_of_demand">Law of Demand</a>:<br /><br /><blockquote>Consumers buy more of a good when its price decreases and less when its price increases.</blockquote><br />This law is self-evident enough that I don't think it requires any argumentation. 
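Further down, this law gets quantified as an elasticity of demand: the percent change in quantity bought per percent change in price. The arithmetic is simple enough to sketch here (a hypothetical helper of my own, using a value from the table below; the proportional rule is a linear approximation, reasonable for small price changes):

```python
def quantity_change_pct(elasticity, price_change_pct):
    """Percent change in quantity demanded for a given percent price change
    (linear approximation, reasonable for small price changes)."""
    return elasticity * price_change_pct

# Using beef's estimated elasticity of -0.61 (see the table below),
# a 10% price increase means roughly 6.1% less beef demanded:
print(quantity_change_pct(-0.61, 10))  # about -6.1
```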
To make clear its relevance, we can rewrite it:<br /><br /><blockquote>Consumers buy less meat when its price increases due to increased regulations in the form of larger cages, more frequent veterinary inspections, etc.</blockquote><br />We can in fact directly calculate the "abolition coefficient" of a welfare change. The <a href="https://en.wikipedia.org/wiki/Elasticity_%28economics%29">elasticity of demand</a> measures the percent change in the quantity of a good demanded for each percent change in its price. Because we live in a country bloated with agricultural subsidies, armies of economists are employed to predict how demand for various animal products changes with price. Here's a table taken from <a href="http://naldc.nal.usda.gov/download/CAT86866758/PDF">Huang</a>:<br /><br /><table><tbody><tr><th>Good</th><th>Elasticity</th></tr><tr><td>Beef</td><td>-0.61</td></tr><tr><td>Pork</td><td>-0.73</td></tr><tr><td>Chicken</td><td>-0.53</td></tr><tr><td>Turkey</td><td>-0.68</td></tr><tr><td>Eggs</td><td>-0.14</td></tr></tbody></table><br />So for beef, a 10% increase in price leads to 6.1% fewer cows on factory farms. <br /><br />The precise calculation of how much a given law will change the lives of animals is of course complex. But the important thing to note is that these elasticities are always negative - more expensive meat always leads to less meat. Always.<br /><br /><i>(An important consideration is whether increasing the price of chicken will just drive people to eating beef instead, a rather dubious gain. You can see from e.g. 
<a href="http://dare.agsci.colostate.edu/skoontz/arec510/papers/eales%20unnevehr%20%28ajae%201988%29.pdf">Eales and Unnevehr</a> that, while this does happen to some degree, people do substitute vegetables for meat, meaning that there is a real gain.)</i>Xodaraphttp://www.blogger.com/profile/00235189388350960670noreply@blogger.com4tag:blogger.com,1999:blog-6172724226008713264.post-79890868182677145142012-03-25T12:14:00.000-07:002012-03-25T12:14:55.261-07:00Why there must be universal grammar<p>The Guardian ran <a href="http://www.guardian.co.uk/technology/2012/mar/25/daniel-everett-human-language-piraha?newsfeed=true">an interview</a> with <a href="http://en.wikipedia.org/wiki/Daniel_Everett">Daniel Everett</a> yesterday. Everett is a linguist most famous for his claim that <a href="http://en.wikipedia.org/wiki/Universal_grammar">universal grammar</a> (the belief, popularized by Chomsky, that some rules of grammar are "hard wired" into the brain) is false. Specifically, he believes that the <a href="http://en.wikipedia.org/wiki/Pirah%C3%A3_language">Pirahã language</a> lacks <a href="http://en.wikipedia.org/wiki/Recursion#Recursion_in_language">recursion</a>.</p> <p>His claims are quite controversial, but one thing which is worth mentioning is that universal grammar is (for a reasonable definition of "proof") provably correct. By this I mean:</p> <blockquote>Theorem: Learning grammar is so hard that the only way humans (or anyone) can do it is if they have innate structures.</blockquote> <p>This is related to Chomsky's <a href="http://en.wikipedia.org/wiki/Poverty_of_the_stimulus">poverty of the stimulus</a> argument.</p> <p>It can be proven in the following way: suppose we restrict ourselves to just the subset of English sentences consisting only of nouns and verbs. "I like John" and "You are here" would be two examples. These both follow the pattern "noun verb noun". 
A sentence like "jump run you" is non-grammatical, because "verb verb noun" is not an acceptable pattern in English.</p> <p>Now let's consider how long it would take a learner to learn these patterns. There are 2<sup>3</sup> = 8 possible patterns of length three, so if a learner thinks they're all possible, they will have to test out all eight of them. ("Mommy, is 'jump run you' a sentence?")</p> <p>Most sentences have much more than three words of course, so a learner will need to test out the 2<sup>4</sup> = 16 four word patterns, the 2<sup>5</sup> = 32 five word patterns, etc. In general, there are 2<sup>n</sup> possible sentences with <i>n</i> words, meaning that the number of tests that the learner will need to run is exponential in the number of words.</p> <p>The <a href="http://en.wikipedia.org/wiki/Cobham-Edmonds_thesis">Cobham-Edmonds thesis</a> states that any problem which takes exponential time is, in practice, unsolvable.</p> <p>To see what this means in practice, drop the noun/verb restriction: English has, depending on your definition of "part of speech", about 20 parts of speech, so there are 20<sup>n</sup> patterns of length <i>n</i>. If you tested one grammar per second, it would take you about a month to learn all the five word grammars. The six word grammars would take you two years, and you would be forty before you learned all the seven word grammars. That last sentence had 22 words, and it would take you 10<sup>21</sup> years to test all of the 22-word grammars. The universe is only about 10<sup>10</sup> years old.</p> <p>So who knows whether all languages are recursive. But it seems unlikely that human children consider all possible grammars equally. They must use some shortcuts and those shortcuts must, by definition, be innate.</p>Xodaraphttp://www.blogger.com/profile/00235189388350960670noreply@blogger.com6tag:blogger.com,1999:blog-6172724226008713264.post-79414326369657883892012-03-11T20:55:00.000-07:002012-03-11T20:55:40.945-07:00Thoughts on Sokal v. 
Lynch<p>The New York Times ran <a href="http://opinionator.blogs.nytimes.com/2012/03/11/defending-science-an-exchange/">a debate</a> between Sokal (of <a href="http://en.wikipedia.org/wiki/Sokal_affair">Sokal affair</a> fame) and Lynch regarding the underpinnings of science, apparently sparked by Rick Perry's denial of evolution. I've read several "why science is better than religion" pieces like this, and none of them ever give what I see as the obvious proof, so I'd like to contribute it here.</p> <p>Suppose a false theory happens to pass any single experiment 10% of the time. If you do one experiment, there's a 10% chance you'll falsely believe the theory is good. Do two independent experiments, and that probability drops to 1%. Three, four, ..., N experiments later, and the likelihood that you'll have seen nothing but false positives is a vanishingly small 1 in 10<sup>N</sup>.</p> <p>Another way of putting this is: the <a href="http://en.wikipedia.org/wiki/Law_of_large_numbers">law of large numbers</a> says that, if you do a large number of experiments, you'll tend towards the right answer. If evolution is supported by vast amounts of evidence, the probability of it being wrong is so small as to be inconsequential. This has nothing to do with experimental science; it's just a mathematical fact. QED.</p> <p>I guess Prof. Lynch will tell me that the mathematical assumptions which underlie the law of large numbers are just as suspect as the assumption that the Bible is infallible. Maybe, but it strikes me that few fundamentalists are claiming that 2 + 2 = 5, indicating that much progress could be made by making clear the mathematical foundations of science.</p> <p>I'll leave you with what I think is Sokal's best argument (tragically not in that op-ed):</p> <blockquote>Anyone who believes that the laws of physics are mere social conventions is invited to try transgressing those conventions from the windows of my apartment. 
(I live on the twenty-first floor.)</blockquote>Xodaraphttp://www.blogger.com/profile/00235189388350960670noreply@blogger.com1