The New Pangloss

Life expectancy in the late paleolithic was 33 years; today it is 67. So we would expect (ceteris paribus) that "natural" things are unhealthy. Instead, the tagline "All-Natural" is almost always a claim that the product is healthy.

This appeal to nature misunderstands a basic evolutionary principle:

  • Evolution selects for gene propagation, not health

Everyone is probably familiar with extreme examples, like black widow spiders, which practice sexual cannibalism. A canonical example of evolution's imperfection in humans is the appendix, which may be largely vestigial. The appendix shows the dangers of basing our actions on "evolution": even if we evolved an appendix so that our primate ancestors could better digest leaves, that doesn't necessarily imply that eating those leaves is the healthiest thing to do.

At the other end of the spectrum are the numerous fad diets based around the proposition that humans evolved to eat some certain amount or type of meat. But why should the evolutionary benefits of eating meat be related to health?

This was brought up by Jared Diamond in his book Why Is Sex Fun?, based largely on the paper "Why do Men Hunt?" by Hawkes, et al. They note that, as far as anyone can tell, hunter-gatherer societies would be better off (more and healthier kids) if they were just gatherer societies; that is, if men didn't hunt.[1] So why do men hunt even though it hurts not just themselves but their children? Because if you bring down a big animal, you can trade that for extramarital sex, and it's better to have lots of extramarital kids, some of whom might starve to death, than to devote all your time to making sure a few intramarital kids live.

That's probably not something you'll find in the average diet book.

Voltaire parodied the idea that there is a compassionate god ensuring that natural events are good. Were Voltaire alive today, Candide might instead have poked fun at pseudo-Darwinians who claim that "Mother Nature" looks out for us.

  1. Note that "gathering" in this case can mean "hunting" small animals and fish, so it's not that the nutritional value of meat is greater. It might be better phrased as "Why do men hunt large animals instead of small ones?" There are several other possible objections that Diamond addresses in that book, which I won't repeat here.

Cost-Effective Ways to Fill Your Stomach

There is a large amount of fad dietary advice regarding which foods will fill you up. Holt et al. had an idea which was elegant in its simplicity: take a ton of people, and force them to eat some foods. A few hours later, place them in front of a buffet. The more people eat from the free buffet, the less full they must be.

There are some interesting confounds - for example, jelly beans seem to really fill people up. The researchers surmised that eating such large numbers of jelly beans made people too sick to keep eating.

Nonetheless, this gives us a nicely empirical measure of how full foods make you. We can then figure out what the cheapest food to eat is (assuming a full stomach is your goal).

Food                 Cost of 100 units of satiation (cents)   Cost of 100 calories (cents)
Ground Beef          20                                       36
Baked Beans          18                                       31
Wheat Bread           9                                       18
Potatoes (Red)        8                                       26
White Pasta           7                                        8
Whole Grain Pasta     5                                        9


I used Peapod to find the prices of everything. In the event that there were multiple products in a category, I chose the cheapest. The original paper was not terribly specific (is "beef" ground beef? lean? steak?) but there was not a large difference between price per calorie of different types of food at the very cheap end, so this shouldn't matter too much. The only significant issue was that cod is vastly more expensive than anything else (probably due to overfishing); if any potential experimenters are out there, I would be interested in seeing the satiety index of cheaper fish like tuna. (Although if canned tuna were as satiating as cod, it would have a rating here of 44, which would still make it more than twice as expensive as beef!)

Using the price and nutritional information, I calculated the price per calorie. The satiety index tells us the satiety per calorie. Using (Price / Calorie) * (Calorie / Satiety) we can find Price / Satiety. This is the number presented here. Numbers are hundredths of cents per unit of satiety.
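The calculation above is simple enough to sketch in code. The figures here are illustrative stand-ins (beef at 36 cents per 100 calories and a satiety index of about 176, roughly the values implied by the table), not the raw data itself:

```java
// Sketch of the cost-per-satiation calculation described above.
// The satiety index is satiety per calorie (white bread = 100), so
// (Price / Calorie) * (Calorie / Satiety) = Price / Satiety.
public class SatiationCost {
    // costPer100Cal in cents; satietyIndex relative to white bread = 100.
    // Returns cents per 100 units of satiation.
    public static double costPer100Satiation(double costPer100Cal, double satietyIndex) {
        return costPer100Cal * 100.0 / satietyIndex;
    }

    public static void main(String[] args) {
        // Illustrative numbers only: 36 cents per 100 calories of ground beef
        // and a satiety index of about 176 give roughly the table's 20 cents.
        System.out.println(costPer100Satiation(36, 176)); // ≈ 20.45
    }
}
```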

Raw data can be found here.


Unsurprisingly, vegetable-based foods are much cheaper. A less obvious conclusion is that broad groups of foods do not have the same cost effectiveness - for example, baked beans cost almost as much as beef, whereas lentils are half beef's cost.

A frugal meal of rice and lentils has three times the cost effectiveness of a meal of beef, and is an order of magnitude more effective than anything involving fish.

Do Non-human Animals Feel Pain?

A lot of speculation on non-human pain is through analogy. This table lists six things which seem to be important in human pain perception; by analogy, it seems that any being which also meets these criteria feels pain to some extent.

This table is an update of Varner's 1998 version. The most significant ambiguity in this table is in the insects, as there is some evidence that pain perception differs between animals with exoskeletons and those with endoskeletons. I marked "some" insects as having nociceptors, as I was unable to find anyone who would make the categorical statement "All insects have nociceptors."

A "Y" means yes, blank means no, and question mark means the evidence is equivocal.

                                     Invertebrates                        Vertebrates
                                     Earthworms  Insects    Cephalopods   Fish    Amph./Reptiles  Birds  Mammals
Has Nociceptors?                     ?           Some[3-6]  ?             Y[2]    Y[1]            Y      Y
Central Nervous System?                                     Y             Y       Y               Y      Y
Nociceptors Connected to CNS?                               Y             Y       Y               Y      Y
Has Endogenous Opioids?              Y           Y          ?             Y       Y               Y      Y
Response Affected by Pain-killers?   ?           ?          ?             Y[7,8]  Y[8,9]          Y      Y
Behavioral Pain Response?                                   Y             Y       Y               Y      Y

Unless otherwise stated, all data comes from Varner, G. E. In Nature's Interests?: Interests, Animal Rights, and Environmental Ethics. Oxford University Press, USA, 2002. Other sources are given by bracketed numbers:
  1. Allen, C. “Animal pain.” Noûs 38.4 (2004): 617-643. Print.
  2. Sneddon, L. U., V. A. Braithwaite, and M. J. Gentle. 2003. Do fishes have nociceptors? Evidence for the evolution of a vertebrate sensory system. Proceedings of the Royal Society of London. Series B. Biological sciences 270: 1115-1121.
  3. Pastor, J., B. Soria, and C. Belmonte. 1996. Properties of the nociceptive neurons of the leech segmental ganglion. Journal of Neurophysiology 75: 2268-2279.
  4. Wittenburg, N., and R. Baumeister. 1999. Thermal avoidance in Caenorhabditis elegans: an approach to the study of nociception. Proceedings of the National Academy of Sciences of the United States of America 96: 10477-10482.
  5. Illich, P. A., and E. T. Walters. 1997. Mechanosensory neurons innervating Aplysia siphon encode noxious stimuli and display nociceptive sensitization. The Journal of Neuroscience 17: 459-469.
  6. Tracey, J., W. Daniel, R. I. Wilson, G. Laurent, and S. Benzer. 2003. painless, a Drosophila gene essential for nociception. Cell 113: 261-273.
  7. Sneddon, L. U. “The evidence for pain in fish: the use of morphine as an analgesic.” Applied Animal Behaviour Science 83.2 (2003): 153-162. Print.
  8. Machin, K. L. “Fish, amphibian, and reptile analgesia.” The Veterinary Clinics of North America: Exotic Animal Practice 4.1 (2001): 19. Print.
  9. Machin, K. L. “Amphibian pain and analgesia.” Journal of Zoo and Wildlife Medicine 30.1 (1999): 2-10. Print.

Why these criteria?

Whatever the qualitative experience of pain comes from, it seems almost tautologically true that you first need to sense a stimulus in order to find it painful. So the requirement for nociceptors (or a direct analogue) is relatively straightforward.

The requirement of a centralized nervous system is probably equally straightforward at first glance: in order to be in pain, there needs to be a singular, distinct entity who is in pain. Varner notes that "Insects do not favor damaged limbs or become less active after internal injuries," implying that the "pain" insects feel, if any, is heavily localized.

If you have a mechanism to regulate pain, whether endogenous (i.e. created by the body) or through external drugs, that seems to indicate that the species has evolved not just to feel pain, but also to end pain that it feels. (Implying that not only is pain there, but it can also in some circumstances be "bad.")

Most theories for the evolution of pain assume that at least part of its function is to drive the organism away from noxious stimuli. A lack of behavioral response therefore suggests a lack of pain sensing.


Evidence indicates that most of what we colloquially think of as "animals" feel pain. For cephalopods at least, Machin suggests that absence of evidence does not imply evidence of absence; for insects and worms, by contrast, the absence of evidence appears to be more telling.

Lucene Performance

I hope here to give a brief overview of the theoretical complexity of various Lucene operations. It has been my experience that the practical performance is in line with the theoretical performance for most practical input sizes, but your mileage may vary. (Some of the LaTeX requires Tex the World in order to render.)

Finding a node in a skip list takes log n operations. Since Lucene's term index is a skip list of unique terms, the query "do any documents contain term t?" will take log(number of unique terms). The query "find all documents containing t" will take log(number of unique terms) + number of matches. If your index is sharded (i.e. your index is not optimized) you will need to multiply these formulae by the number of segments, since each segment has to be searched independently.
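As a toy illustration of why lookup is logarithmic, here is a term dictionary built on a TreeMap - a red-black tree standing in for Lucene's skip list (an assumption for illustration, not Lucene's actual code), with the same O(log T) lookup bound:

```java
import java.util.Arrays;
import java.util.List;
import java.util.TreeMap;

// Toy term index: sorted term -> posting list. A TreeMap (red-black tree)
// gives the same O(log T) lookup bound as a skip list.
public class ToyTermIndex {
    private final TreeMap<String, List<Integer>> postings = new TreeMap<>();

    public void add(String term, Integer... docs) {
        postings.put(term, Arrays.asList(docs));
    }

    // "Find all documents containing t": O(log T) to locate the term,
    // plus O(number of matches) to return them.
    public List<Integer> search(String term) {
        return postings.getOrDefault(term, List.of());
    }

    public static void main(String[] args) {
        ToyTermIndex idx = new ToyTermIndex();
        idx.add("lucene", 1, 3, 7);
        idx.add("skip", 2, 3);
        System.out.println(idx.search("lucene")); // [1, 3, 7]
    }
}
```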

The real optimizations of Lucene come from the fact that you never search for all documents which match the query, but rather the top k. Call T the number of unique terms in your index. Say you have s sub-queries. Finding the s posting streams (one per sub-query) takes [; s \log T ;] (since finding a posting stream is just finding that term in the skip list). Now suppose there are p matching postings in total across your query's terms, i.e.

[; p = \sum_{i=1}^{s} p_i ;]

where [; p_i ;] is the number of postings for sub-query i.

Finding the top k matches can be done in [; p \log k ;], so the entire search runs in [; s \log T + p \log k ;], which is almost always going to be dominated by [; p \log k ;][1]. This means that 10 queries, each matching 10 documents, take about as long as 1 query matching 100 documents (assuming k >= 100, of course). To put it another way: for standard disjunctive (OR'd) queries, the number of clauses doesn't really affect performance, except to the extent that more documents are potential matches.
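The [; p \log k ;] bound comes from pushing each of the p candidate hits through a size-k min-heap, a standard trick sketched below (the scores are made up for illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

// Keep the top k scores out of p candidates using a bounded min-heap:
// each of the p inserts costs O(log k), so the total is O(p log k).
public class TopK {
    public static List<Double> topK(double[] scores, int k) {
        PriorityQueue<Double> heap = new PriorityQueue<>(); // min-heap
        for (double s : scores) {
            heap.offer(s);
            if (heap.size() > k) {
                heap.poll(); // evict the current minimum
            }
        }
        List<Double> result = new ArrayList<>(heap);
        result.sort(null); // natural (ascending) order
        return result;
    }

    public static void main(String[] args) {
        System.out.println(topK(new double[]{0.2, 0.9, 0.5, 0.7, 0.1}, 3)); // [0.5, 0.7, 0.9]
    }
}
```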

This answers another common question about Lucene queries: what is the performance of "special" query types like range and prefix? The answer is that these queries add an additional penalty proportional to T, since they have to enumerate through the term index to find all terms which are in their range. However, the time actually finding documents in that range is not affected (although obviously the user will view it all as "query time").
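The extra cost of a prefix query is concrete in the toy sorted-dictionary picture (again a TreeMap standing in for the term index, as an illustrative assumption): the query must first enumerate every term in its range before any documents are fetched.

```java
import java.util.Set;
import java.util.TreeMap;

// Prefix queries pay an extra enumeration cost: every term between
// "luc" and "luc\uffff" must be visited before any documents are fetched.
public class PrefixScan {
    public static Set<String> termsWithPrefix(TreeMap<String, ?> index, String prefix) {
        // subMap finds the range in O(log T); iterating the view is
        // linear in the number of matching terms.
        return index.subMap(prefix, prefix + Character.MAX_VALUE).keySet();
    }

    public static void main(String[] args) {
        TreeMap<String, Integer> index = new TreeMap<>();
        index.put("luce", 1);
        index.put("lucene", 2);
        index.put("lucid", 3);
        index.put("zebra", 4);
        System.out.println(termsWithPrefix(index, "luc")); // [luce, lucene, lucid]
    }
}
```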

If you've investigated Lucene performance on the indexing side of things, you know the key variable is "merge factor", or the maximum number of partitions the index can have. More partitions means faster writes and slower reads.

Say you have d documents, T unique terms, and your memory can hold b documents at once. Then indexing can be done in [; d^2 \log T / 2b ;], which is dominated by [; d^2 ;]. But this is if you have only one partition. If you have p partitions, indexing can be done in (get ready for this, cause it's a doozy):

[; \frac{p \, d^{1+1/p} \log T}{2 b^{1/p}} ;]

(Note that in the case where p = 1, this reduces to [; d^{1+1/p} ;] or [; d^2 ;], which is what our original formula was dominated by)[2]. The key term there is again [; d^{1+1/p} ;]: you can see that going from one partition to 10 causes indexing time to fall from [; d^2 ;] to [; d^{1.1} ;], and as the number of partitions increases, performance becomes linear (which isn't too surprising if you think about it: if p = d, then adding a document is just appending to a list, which is constant, plus managing the segments file, which is linear).

But what about the other side of things? If p = d and the index is effectively a linked list, then indexing time is linear but so is search time (which is a huge degradation from the [; p \log k ;] stated above). In practice, p is always much less than d (the operating system will probably throw "too many open files" if p is even in the double digits unless you're using cfs). Ignoring practical considerations though, you essentially have to run the query once for each partition, so the number of partitions (roughly) linearly increases query time.
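The effect of the [; d^{1+1/p} ;] term is easy to check numerically; a back-of-the-envelope sketch (ignoring the other factors in the formula) shows the dominant factor collapsing by several orders of magnitude when you go from one partition to ten:

```java
public class PartitionCost {
    // The dominant factor d^(1 + 1/p) from the indexing formula above.
    public static double dominantTerm(double d, double p) {
        return Math.pow(d, 1.0 + 1.0 / p);
    }

    public static void main(String[] args) {
        double d = 1_000_000; // a million documents
        System.out.println(dominantTerm(d, 1));  // d^2   = 10^12
        System.out.println(dominantTerm(d, 10)); // d^1.1 ≈ 4 * 10^6
    }
}
```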

  2. Lester, N., A. Moffat, and J. Zobel. “Fast on-line index construction by geometric partitioning.” In Proceedings of the 14th ACM international conference on Information and knowledge management, 783. ACM, 2005.
(note: to get the cited equation for Lester et al., substitute in the disk access formula on page 5 into the value of r for fixed p on page 6.)

Madison Vegan Fest

We're organizing a vegan fest in Madison. Come join us.

A review of Blindsight

Descartes’ cogito is probably the most famous phrase in all of philosophy: “I think, therefore I am.” Even those who doubt everything else seem to accept the existence of the self. Everyone, that is, except Nietzsche, who argued that the notion “there is thinking” does not imply “I am thinking” or even "there is a thinker." We say that "there is rain", but this does not imply that "there is a rainer." So why should "there is thinking" imply "there is a thinker?"

For most people, this seems like an argument that only a philosopher would consider. Of course thinking requires a thinker in a way that raining does not require a rainer. It's just obvious.

But to everyone’s shock, science seems to be proving Nietzsche right.

Surely an individual neuron cannot be said to “think”, at least by any standard definition of the term. Yet the collection of neurons which makes up our brain does think. So thought must be an emergent process: “I” am not a single person; I am a “corporation”, to use Dennett’s term.

Hofstadter carries this to its logical conclusion in his "ant fugue": an ant is not capable of very complex behavior. Yet an ant colony is capable of extremely complex behavior. Does it not make sense to say that the colony is not just “conscious” but more conscious than an ant? (Can we say that America is more conscious than an American?)

Blindsight asks the question that this review has been leading up to: why does “intelligent life” imply “conscious life?” Why does an ant need consciousness as long as the colony itself is conscious? (Why does an ant need to think at all?)

As you can guess from the fact that this review needs a page-long introduction to contemporary philosophy of mind in order to introduce Blindsight’s leitmotif, this is not science fiction but *science* fiction. Watts is trained as a biologist, but he references everything from physics to philosophy. This is not the kind of book where aliens are like humans, except with green skin. This is the kind of book you need to read with reference materials on hand.

The frame is standard “first-contact” fare: something strange is detected at the edge of the solar system. Humans investigate. New things are discovered. But the content is terrifically new. Asimov appealed to L’homme Machine as evidence that robots could be created. Watts takes it to mean that humans are robots, we just don’t break down frequently enough to see it. So he calls upon his fictional universe to throw monkey wrenches into the gears of man-as-robot.

One of the most memorable monkey-wrenches gives the book its title: damage to the brain can cause people to lose their awareness of sight, without actually losing sight itself. If you throw them a ball, they can catch it. If you ask them how they knew where the ball was, they will have no idea. Hence, “blind” sight. This is, in essence, a short summary of the entire book: just because you can do something doesn’t imply that you are aware (or even that there is one who is aware). The book is sort of like a Russian doll of Searle's Chinese room; each piece is an uncomprehending part of a larger picture.

It is tempting to call Blindsight a horror novel, but that is misleading. The horror is not that aliens are coming to kill you. The horror is that you were never alive to begin with.

The conclusion is final yet unsatisfying, which is of course the moral; this book almost requires a new branch of philosophy, a post-structural existential angst: it is not that you live and die for no reason, it’s that there is no “you” at all.

I suspect that if you've read this far, you will know whether you will like Blindsight or not. If you enjoy Oliver Sacks’ investigations into mental curiosities, or Dennett’s investigations into the self, you will probably love it. If you like your fiction fictional and undemanding, you might want to look elsewhere.

Pidgin notifications for select users

Continuing on my path of using Facebook without actually using Facebook, I installed pidgin-facebook-chat to integrate Facebook chat with my other chats (IRC/ICQ). I then had the problem of too many notification messages, since Pidgin tells me every time a friend logs on or off (which during peak times is once every few seconds).

To fix this, I wrote a script to only notify me when select users log on. You can find the gist of it here.


MARS is an IDE for MAL development, and is immeasurably better than SPIM. It's missing a few of SPIM's macros though; to implement them you can add these macros to your macro file (MARS calls them "pseudo-ops").

Just extract the MARS jar (jars are just renamed .zip files) and edit the file called "PseudoOps.txt". Note that the whitespaces between commands in the macro file are tabs, not spaces.

Note that MARS doesn't allow overloaded macros, so I had to split putc into putc and putci depending on whether you are using a register or an immediate value.

Parsing Facebook's RSS

90% of my Facebook updates are either things I don't care about (*ville, * wars) or from people I don't care about. With Yahoo's Pipes framework it's pretty easy to parse out certain pieces of your feed (but not all - I think most of the API still requires an API key).

Facebook still tries to force you to visit their site, but you can use a relatively simple regex to extract the URL. See an example pipe here. If you subscribe to this feed, it will contain just those links, and clicking on them will take you directly to the location pointed at by the URL - you don't have to visit Facebook first.
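The idea behind the regex can be sketched like this. I'm assuming here that the feed wraps the real destination in a redirect link with the target in a "u=" query parameter; the exact format is an assumption for illustration, so adjust the pattern to whatever your feed actually contains:

```java
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of pulling the real destination out of a redirect-style link.
// The "u=" parameter format is assumed for illustration.
public class UnwrapLink {
    private static final Pattern TARGET = Pattern.compile("[?&]u=([^&]+)");

    public static String unwrap(String redirectUrl) {
        Matcher m = TARGET.matcher(redirectUrl);
        if (m.find()) {
            // The target URL is percent-encoded inside the parameter.
            return URLDecoder.decode(m.group(1), StandardCharsets.UTF_8);
        }
        return redirectUrl; // not a redirect; pass through unchanged
    }

    public static void main(String[] args) {
        String wrapped = "http://www.facebook.com/l.php?u=http%3A%2F%2Fexample.com%2Fpost&h=abc";
        System.out.println(unwrap(wrapped)); // http://example.com/post
    }
}
```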

Will going vegetarian decrease poverty?

Frequently in discussions on vegetarianism, someone will claim something along the lines of:
1. We're feeding food to animals who we then eat
2. By the second law of thermodynamics this means we're wasting food
3. Therefore, if we stopped feeding food first to animals, there would be more food and we could feed the poor better
Someone usually then responds with something like:
There is not a fixed quantity of food produced; if we stopped eating animals we would simply produce less food, and the poor would be as hungry as before.
It is not immediately obvious who is correct. We need to know two things:
  1. Will food prices fall as a result of going vegetarian?
  2. Will this cause the poor to eat better?

The first question is easily answered. A fall in demand results in a fall in price (see picture: a decrease in demand from D1 to D2 results in a decrease in price from P1 to P2).

However, the mere fact that food is cheaper does not mean it is more available. Given that many poor people sell food as their major source of income, a decrease in food prices means a decrease in their wages. The question we now need to ask is: do wages decrease faster than food prices?

The answer is no[1]:
Even though many rural households gain from higher food prices, the overall impact on poverty [of high food prices] remains negative.
So going vegetarian will help decrease poverty.

1. Ivanic, M., and W. Martin. “Implications of higher global food prices for poverty in low-income countries.” Policy Research Working Paper 4594 (2008): 405-16.

Credits: picture is a modified version of this

RapidMiner and TTests

RapidMiner doesn't come with a way to test the probability that the difference in two groups' attributes is statistically significant. (The operator they have called "T-Test" actually does an F-Test and compares the performance of two models, not two groups of data.)

I have created an operator that uses Welch's T-Test to help with this. See the code at github. I've also attached a screenshot showing it in action; this one is looking at sonar data and determining the probability that sonar differences between rocks and mines are significant.
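For reference, the statistic itself is simple. This is a minimal sketch of Welch's t and the Welch-Satterthwaite degrees of freedom from two samples' summary statistics, not the operator's actual code:

```java
public class WelchT {
    // Welch's t statistic for two samples with means m, sample variances v
    // (computed with n - 1 in the denominator), and sizes n.
    public static double tStatistic(double m1, double v1, int n1,
                                    double m2, double v2, int n2) {
        return (m1 - m2) / Math.sqrt(v1 / n1 + v2 / n2);
    }

    // Welch-Satterthwaite approximation for the degrees of freedom.
    public static double degreesOfFreedom(double v1, int n1, double v2, int n2) {
        double a = v1 / n1, b = v2 / n2;
        return (a + b) * (a + b)
                / (a * a / (n1 - 1) + b * b / (n2 - 1));
    }

    public static void main(String[] args) {
        // Illustrative numbers only.
        System.out.println(tStatistic(10.0, 4.0, 30, 9.0, 9.0, 25)); // ≈ 1.42
    }
}
```

The resulting t and degrees of freedom go into the t-distribution's CDF to get the probability that the group difference is due to chance.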

Vegetarianism and intelligence

Higher IQ at age 10 years was associated with an increased likelihood of being vegetarian at age 30... IQ remained a statistically significant predictor of being vegetarian as an adult after adjustment for social class (both in childhood and currently), academic or vocational qualifications, and sex
- IQ in childhood and vegetarianism in adulthood
Kanazawa claims it's because the purpose of intelligence is to respond to new experiences; vegetarianism being evolutionarily "new" implies that only smarter people will accept it.

The immorality of Javascript's "this"

I have a love/hate relationship with Javascript's "this." On the one hand, simply moving an anonymous function into a normal method can break a whole page, because "this" gets rebound. On the other, closures (especially in stuff like jQuery) are a humongous pain without it.

The major problem is that any function referencing "this" runs differently in different contexts. $('div').each(function() { alert(this); }) is different than $(document).ready(function(){ alert(this); }). That being said, the little .each() is pretty slick.

The fundamental theorem of universal ethics is that ethics have to be just that - universal. As a universal function, it cannot run differently in different contexts. There can be no "if user == this" branch, nor a "if user.skinColor == this.skinColor" switch. In fact, there can be no reference to "this" at all.

And that is why Javascript's "this" is immoral.

MIPS syntax highlighting

There aren't a lot of IDEs for MIPS assembly (shocking, right?) I made an xml for gtksourceview to do syntax highlighting for MAL; you can find it on github. This will work for any editor which allows gtksourceview (e.g. gedit).

You will need to move it to the appropriate folder, probably /usr/share/gtksourceview-2.0/language-specs.

The Problems of Philosophy - The Value of Philosophy

Russell ends with a brief chapter on how philosophy helps one to find truth etc. etc. I will take a different tack.

We saw that it is fruitful to only talk about the abstraction layer. When I say that I have a collection of things, what you should take from that is not any insight into what the things in themselves are, but rather that this collection has a size, that things can be added to or removed from it, and so on.

Now, when I have my collection, you know that it has a size, but you don't know how that size is calculated. Maybe I keep a running count as things are added and removed, or maybe I count the number of objects in my collection every time you ask. Furthermore, you don't know anything else besides the fact that it is a collection, and you don't need to - you can iterate over a list of people as easily as over a list of rocks.

The fundamental question in ethics is, then, what things do you have to implement before we can say that you have moral worth? Must you have basic reasoning skills, addition and so forth? Then many children and television networks have no moral status. Do you just need human DNA? Then not only is abortion immoral, but the loss of skin cells is a tragedy.

Polymath Jeremy Bentham gave a great example of separation of concerns when he said: "The question is not, 'Can they reason?' nor, 'Can they talk?' but rather, 'Can they suffer?'" You may implement any number of interfaces: you can read, talk, reason etc. - this is irrelevant to the ethicist. If I am writing an ethics evaluator, all I need to know is whether you can suffer.

public EthicsEvaluator(WhiteHumanMale H) { }

public EthicsEvaluator(ISufferable H) { }

Bentham used this separation of concerns to argue for the rights of children, women and the abolition of slavery, among other things. When someone would object that women or slaves weren't as smart as white men he could respond, "So what?" As long as a slave was feeling pain from his slavery, then the slavery was immoral.

Peter Singer used the concept of interfaces to argue that some non-humans have moral status. He argued that there is no criterion which all humans, and only humans, have. Therefore, either some humans must not have moral status, or some non-humans must have moral status.

For example, you can argue that only humans have the ability for higher mathematics, such as calculus. This may be true, but young children and the mentally handicapped (much less those in a coma) don't have the ability to do higher math; thus any interface requiring an ability to do calculus would leave them out in the cold. Given that we are against killing children willy-nilly, the ability to do math must not be part of the interface. This can be repeated with any of the "higher" reasoning skills, until we are forced to the conclusion that many animals (at least most vertebrates) have moral status.

This argument has been very successful (Wikipedia says "there is little criticism against the argument although it was first put forward in the third century AD."); you can't throw a stone in a philosophy department without hitting a vegan.

An interface for ethical personhood (i.e. the criteria for a being to have moral status) was developed by Mary Anne Warren. Her requirements are the following:

The being must have some subset of the following:
1. Consciousness (capacity to feel pain)
2. Reasoning (ability to solve new problems)
3. Self-motivated activity (not motivated solely through genetics)
4. The capacity to communicate messages on indefinitely many concepts
5. The presence of self-concepts and self-awareness

She argued that foetuses cannot be said to have any of these things, and therefore abortion cannot be immoral. We know that many animals can count, dolphins, pigs and magpies can recognize themselves in the mirror while young humans can't[1], and many animals like pigs can solve problems like opening a box to get food. Accepting Warren's criteria then would require us to give fewer rights to certain humans than we currently do, and more rights to non-humans than we do now.

Which interface we accept has great impact on how we live our lives; thus, the question of "Where's the WSDL?" while seemingly esoteric, is of utmost importance.

The Problems of Philosophy - Epistemology

This corresponds to the second section of Russell's book, or approximately chapters 5 - 14.

Those who have dabbled with machine learning are familiar with Bayesian inference - essentially, if X and Y are found to occur together, then X and Y are probably related. The fact that I have 10k spam emails and 0 real emails with "v14gr4" in the subject probably indicates that the next email with "v14gr4" in the subject is also spam.
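The inference in the spam example needs nothing more than counts and Bayes' rule; this is a toy illustration with add-one smoothing, not a real filter:

```java
// Toy Bayesian estimate: P(spam | word) from labeled counts, with
// add-one (Laplace) smoothing so unseen words don't give exactly 0 or 1.
public class SpamOdds {
    public static double pSpamGivenWord(int spamWithWord, int spamTotal,
                                        int hamWithWord, int hamTotal) {
        double pWordGivenSpam = (spamWithWord + 1.0) / (spamTotal + 2.0);
        double pWordGivenHam = (hamWithWord + 1.0) / (hamTotal + 2.0);
        double pSpam = (double) spamTotal / (spamTotal + hamTotal);
        double pHam = 1.0 - pSpam;
        // Bayes' rule: P(spam | word) = P(word|spam)P(spam) / P(word)
        return pWordGivenSpam * pSpam
                / (pWordGivenSpam * pSpam + pWordGivenHam * pHam);
    }

    public static void main(String[] args) {
        // 10,000 of 10,000 spam emails contain "v14gr4"; 0 of 10,000 real ones do.
        System.out.println(pSpamGivenWord(10000, 10000, 0, 10000)); // ≈ 0.9999
    }
}
```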

But what if it turns out that I really am friends with the prince of Nigeria, and he really does want to give me forty million dollars in exchange for letting him use my bank account?

In order for us to evaluate this possibility, we need to know the probability distribution. There are far more people who pretend to be the prince of Nigeria than there actually are princes of Nigeria (there are, in fact, no princes of Nigeria). Therefore, we can make a pretty good guess that any email claiming to be from a prince is a scam. But how frequently are there rocks masquerading as chairs?

We might argue:
1. Things tend to look like what they are
2. This looks like a chair
3. Therefore this probably is a chair

But premise (1) here is unsupported - we have no idea how frequently things tend to look like what they are.

What we do instead is to talk about the abstraction layer - the interface of the chair, if you will (abstraction layers are referred to as universals in philosophy). We change our argument to state:

1. If something looks like a chair, then it probably implements ISittable
2. This looks like a chair
3. Therefore it probably implements ISittable

I can then sit on it. We now avoid the question of what the chair actually is in a noumenal sense, but instead ask about the phenomenon (i.e. the public API vs. the private implementation). Perhaps when I sit on what I think is a chair it is really a rock, but as long as it's a comfy rock, who cares?

This is the pragmatist's resolution to our ontological problem - perhaps when I ask for a quicksort you really do a mergesort, but the important part is that you implement ISorter. So problem solved, right? Unfortunately, we have now traded in one problem for another.

I claim that all chairs implement the ISittable interface. We are left with some questions:
  1. What is an interface?
  2. Where does an interface live?
  3. How does one implement an interface?
  4. Are certain interfaces more "real" than others?
In short: where's the WSDL?

One of the most fundamental interfaces is Collection. Java specifies the following:

public interface Collection<E> extends Iterable<E> {
    int size();
    Iterator<E> iterator();
    // etc.
}
I have one stone, then I add another stone. I now have a collection of stones. But who implemented the interface? I can see that size() = 2, and I can iterate through the stones; does this mean that I implemented the interface? But then why is it that anyone else who looks at this collection will also see that size() = 2? Is there some repository of interface implementations which we all have access to? Furthermore, why is it that not only does one stone plus one stone equal two stones, but one planet plus one planet equals two planets, and so on up (and down) the scale?
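For concreteness, a "collection of stones" needs almost nothing to satisfy the interface. This sketch extends Java's AbstractCollection, which derives the rest of Collection from just the two methods above:

```java
import java.util.AbstractCollection;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// A minimal Collection: AbstractCollection fills in everything else
// (contains, isEmpty, toString, ...) once size() and iterator() exist.
public class StonePile extends AbstractCollection<String> {
    private final List<String> stones = new ArrayList<>();

    public void addStone(String stone) {
        stones.add(stone);
    }

    @Override
    public Iterator<String> iterator() {
        return stones.iterator();
    }

    @Override
    public int size() {
        return stones.size();
    }

    public static void main(String[] args) {
        StonePile pile = new StonePile();
        pile.addStone("stone one");
        pile.addStone("stone two");
        System.out.println(pile.size()); // 2
    }
}
```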

There are certain propositions like "x == x" which seem to never be violated. In Java, all classes inherit from Object - is there an analogous requirement for interfaces that they must implement these "a priori" assumptions? If so, how do we determine which assumptions are always true, which are sometimes true, and which are never true?

There are three basic answers to these questions:
  1. Interfaces are real. They exist in some realm of interfaces which we have access to through unspecified means. This is known as realism.
  2. Interfaces don't exist. It is only through human stupidity that we think they do. This is known as nominalism.
  3. Interfaces exist, but only as ideas. This is known as idealism.

The Problems of Philosophy - Ontology

In this post, I hope to give an overview of the first section of Russell's "The Problems of Philosophy" with a view towards programming. Keep in mind that this book is about problems, rather than solutions. This post contains a summary of the first four chapters or so.

"Is there any knowledge in the world which is so certain no reasonable man could doubt it?" Consider the following:

public List QuickSort(List list) {
    return MergeSort(list);
}
We can make the statement "It appears to me that the method performs a quicksort" as this is what the declaration claims to do. However, the method actually performs a merge sort. Thus, we can see that the statement "It appears to me that X is true" does not imply "X is true". In addition, we can see the division between the public signature (sometimes called the "phenomenon" or "qualia") versus the private implementation (the "noumenon" or "thing-in-itself").

The fundamental question of the first chapter is: "Given that I may only know the public signature, how can I find the actual implementation?" The initial response is to say that the public signature is the "reality", but as we saw, we can claim to be doing a quicksort when we're really doing a merge sort. Both run in n log n time and produce identical output; there is no public-facing difference between the two.

The pragmatist's response is: perhaps it is doing a quicksort and perhaps a merge, but it gets us the correct answer and that is as good as the "truth." This response feels unsatisfactory; maybe for all practical purposes quicksort and merge sort are the same, but we didn't ask "What is true for all practical purposes?" we asked, "What is true?" It appears there should be some fact of the matter as to what kind of sort it is doing, even if this truth is unobtainable to us.

The unobtainability of Truth with a capital T leads some to reject the question altogether - there simply is no fact of the matter as to whether it is doing a quicksort or a merge sort; all that is true is that we receive the impression of it sorting. These impressions are not results of what it actually does, the impressions are what it actually does. This too feels unsatisfying.

Russell defines "Idealism" as "The doctrine that whatever exists, or at any rate whatever can be known to exist, must be in some sense mental." Consider the quicksort: we generate a pivot, move stuff to the left and right, then recurse. The pivot, "left," "right" etc. are all ideas - this method takes one idea (the list) applies some other ideas (pivots, recursion etc.) and returns yet another idea (a sorted list). At no point is the inclusion of matter necessary in this, and Occam's razor tells us to discard unnecessary hypotheses. The materialist can respond that matter works equally well here: there are some electrons moving around, they move around in some different ways and then we're done - the inclusion of "ideas" is unnecessary.

Russell concludes here that, while one cannot "prove" the existence of a private implementation, it seems incredibly likely to exist given, for example, the fact that it still has effects even while you aren't looking at it.