How Conscious is my Relationship?

One of the most interesting theories of consciousness is Integrated Information Theory (IIT), proposed by Giulio Tononi. One of its more radical claims is that consciousness is a spectrum, and that virtually everything in the universe from the smallest atom to the largest galaxy has at least some amount of consciousness.

Whatever criticisms one can make of IIT, the fact that it allows you to sit down and calculate how conscious a system is represents a fundamental advance in psychology. Since people say that good communication is the most important part of a relationship, and since any information-bearing system's consciousness can be calculated with IIT, I thought it would be fun to calculate how conscious Gina's and my relationship is.

A Crash Course on Information

Entropy
The fundamental measure of information is surprise. The news could be filled with stories about how gravity remains constant, the sun rose in the east rather than the west, and the moon continues to orbit the earth, but there is essentially zero surprise in these stories, and hence no information. If the moon were to escape earth's orbit we would all be shocked, and hence get a lot of information from this.

Written words have information too. If I forget to type the last letter of this phras, you can probably still guess it, meaning that the trailing 'e' carries little surprise/information. Claude Shannon, the founder of information theory, did precisely this experiment, covering up parts of words and seeing how well one could guess the remainder. (English has around 1 bit of information per letter, for the record.)

Whatever you're dealing with, the important part to remember is that "surprise" is what you get when a low-probability event occurs, and that "information" is proportional to "surprise". Systems which can be predicted very well in advance, such as whether the sun rises in the east or the west, have very low surprise on average. Those which cannot be predicted, such as the toss of a coin, have much more surprising outcomes. (Maximally surprising probability distributions are those where every event is equally likely.) The measure of how surprising a system is (and hence how much information the system has) was named Entropy by Shannon, based on von Neumann's advice that "no one knows what entropy really is, so in a debate you will always have the advantage".
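If you like seeing this as code, here is a minimal sketch in Python (my own illustration, not anything taken from IIT itself): surprise is -log2 of an event's probability, and entropy is just the average surprise.

    import math

    def surprise(p):
        # Surprise (in bits) of an event with probability p: rarer events are more surprising.
        return -math.log2(p)

    def entropy(dist):
        # Entropy = average surprise across the whole distribution.
        return sum(p * surprise(p) for p in dist if p > 0)

    print(entropy([0.5, 0.5]))    # a fair coin toss: 1.0 bit
    print(entropy([0.99, 0.01]))  # "the sun rose in the east": ~0.08 bits
    print(entropy([0.25] * 4))    # four equally likely outcomes: 2.0 bits (maximal)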

Divergence
Someone who knows modern English will have a bit more surprise than usual upon reading Shakespeare - words starting with "th" will end in "ou" more often than one would expect, but overall it's not too bad. Chaucer's Canterbury Tales one can struggle through with difficulty, and Caedmon (the oldest known English poem) is so unfamiliar that the letters are essentially unpredictable:
nu scylun hergan hefaenricaes uard
metudæs maecti end his modgidanc
uerc uuldurfadur swe he uundra gihwaes
eci dryctin or astelidæ
- first four lines of Caedmon. Yes, this is considered "English".
If we approximate the frequency of letters in Shakespeare based on our knowledge of modern English, we won't get it too wrong (i.e. we won't frequently be surprised). But our approximation of Caedmon from modern English is horrific - we're surprised that 'u' is followed by 'u' in "uundra" and that 'd' is followed by 'æ' in "astelidæ".

Since you can make a good estimate of letter frequencies in Shakespeare based on modern English, Shakespearean English and modern English have a low divergence. The fact that we're so frequently surprised when reading Caedmon means that the probability distribution there is highly divergent from modern English.
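In code, divergence measures the extra surprise we pay for modelling one distribution with another. Here's a small Python sketch; the letter frequencies below are invented for illustration, not measured from any actual corpus.

    import math

    def kl_divergence(p, q):
        # Average extra bits of surprise from modelling the true distribution p
        # with the approximation q (the Kullback-Leibler divergence).
        return sum(p[x] * math.log2(p[x] / q[x]) for x in p if p[x] > 0)

    # Made-up letter frequencies, just to show the shape of the calculation.
    modern  = {'e': 0.13, 't': 0.09, 'u': 0.03, 'other': 0.75}
    caedmon = {'e': 0.09, 't': 0.04, 'u': 0.12, 'other': 0.75}

    print(kl_divergence(caedmon, modern))  # extra surprise per letter when reading Caedmon with modern expectations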

Consciousness

Believe it or not, Entropy and Divergence are the tools we need to calculate a system's consciousness. Roughly, we approximate a system's behavior by assuming that its constituent parts behave independently. The worse that approximation is, the more "integrated" we say the system is, and the divergence between the system's actual behavior and that independent approximation gives us its Phi, the measure of its consciousness.
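For the simple two-part case I'll use below, that recipe fits in a few lines of Python. This is a sketch of the measure as used in this post, not the full IIT machinery (which searches over all the ways of partitioning a system); for two parts, this divergence is just the mutual information between them.

    import numpy as np

    def phi(joint):
        # joint[i, j] = probability that part 1 is in state i while part 2 is in state j.
        p1 = joint.sum(axis=1)          # marginal behavior of part 1 on its own
        p2 = joint.sum(axis=0)          # marginal behavior of part 2 on its own
        independent = np.outer(p1, p2)  # how the system would look if the parts ignored each other
        mask = joint > 0
        # Phi = divergence between the actual joint behavior and the independent approximation.
        return float(np.sum(joint[mask] * np.log2(joint[mask] / independent[mask])))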

Our Relationship as a Conscious Being

Here is a completely unscientific measure of Gina's and my behavior over the last day or so:

The (i,j) entry is the fraction of time that I was doing activity i and Gina was doing activity j. (The marginal distributions are written, appropriately enough, in the margins.)

You can see that my entropy is 1.49 bits, while Gina (being the unpredictable radical she is) has 1.69 bits. This means that our lives are slightly less surprising than the result of two coin tosses (I can hear the tabloids knocking already).

However, our behavior is highly integrated: like many couples in which one person is loud and the other is a light sleeper, we're awake at the same time, and our shared hatred of driving means we only travel to see friends as a pair. Here's how it would look if we didn't coordinate our actions (i.e. assuming independence):

The divergence between these two distributions is our relationship's consciousness (Phi). Some not-terribly-interesting computations show that Phi = 1.49 bits.
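For the curious, the not-terribly-interesting computations look roughly like this. The joint distribution below is made up purely to illustrate the recipe (the real numbers are in the table above), so don't read anything into the printed values.

    import numpy as np

    # Hypothetical joint distribution over (my activity, Gina's activity); illustrative only.
    activities = ['sleeping', 'working', 'socializing', 'traveling']
    joint = np.array([
        [0.35, 0.00, 0.00, 0.00],
        [0.00, 0.30, 0.05, 0.00],
        [0.00, 0.05, 0.15, 0.00],
        [0.00, 0.00, 0.00, 0.10],
    ])

    mine, ginas = joint.sum(axis=1), joint.sum(axis=0)   # the marginal distributions
    independent = np.outer(mine, ginas)                  # us, if we never coordinated

    def entropy(p):
        p = p[p > 0]
        return float(-np.sum(p * np.log2(p)))

    # Same recipe as the phi() sketch above, inlined here.
    mask = joint > 0
    phi = float(np.sum(joint[mask] * np.log2(joint[mask] / independent[mask])))

    print(entropy(mine), entropy(ginas), phi)  # our individual entropies and the relationship's Phi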

The Pauli exclusion principle tells us that electrons in the innermost shell have 1 bit of consciousness (i.e. Phi = 1), meaning that our relationship is about as sentient as the average helium atom. So if we do decide to break up, the murder of our relationship won't be much of a crime.

Side Notes

Obviously this is a little tongue-in-cheek, but one important thing you might wonder is why my decision to consider our relationship to have two components (me and Gina) is the correct one. Wouldn't it be better to assume that there are 200 billion elements (one for each neuron in our brains) or even 10^28 (one for each atom in our bodies)?

The answer is that yes, that would be better (apart from the obvious computational difficulties). IIT says that consciousness occurs at the level of the system with the highest value of Phi, so if we performed the computation correctly, we would of course find that it's Gina and myself who are conscious, not our relationship, since we have higher values of Phi.

(The commitment-phobic will notice a downside to this principle: if your relationship becomes so complex and integrated that its value of Phi exceeds your own, you and your partner would lose individual consciousness and become one joint entity!)

I should also note that I've discussed IIT's description of the quantity of consciousness, but not its definition of quality of consciousness.

Conclusion

Our beliefs about consciousness are so contradictory that it's impossible for any rigorous theory to support them all, and IIT does not disappoint on the "surprising conclusions" front. But some of its predictions have been confirmed by evidence (the areas of the brain with the highest values of Phi are more linked to phenomenal consciousness, for example), and the fact that it can even make empirical predictions makes it an important step forward. I'll close with Tononi's description of how IIT changes our perspective on physics:
We are by now used to considering the universe as a vast empty space that contains enormous conglomerations of mass, charge, and energy—giant bright entities (where brightness reflects energy or mass) from planets to stars to galaxies. In this view (that is, in terms of mass, charge, or energy), each of us constitutes an extremely small, dim portion of what exists—indeed, hardly more than a speck of dust.

However, if consciousness (i.e., integrated information) exists as a fundamental property, an equally valid view of the universe is this: a vast empty space that contains mostly nothing, and occasionally just specks of integrated information (Φ)—mere dust, indeed—even there where the mass-charge–energy perspective reveals huge conglomerates. On the other hand, one small corner of the known universe contains a remarkable concentration of extremely bright entities (where brightness reflects high Φ), orders of magnitude brighter than anything around them. Each bright “Φ-star” is the main complex of an individual human being (and most likely, of individual animals). I argue that such Φ-centric view is at least as valid as that of a universe dominated by mass, charge, and energy. In fact, it may be more valid, since to be highly conscious (to have high Φ) implies that there is something it is like to be you, whereas if you just have high mass, charge, or energy, there may be little or nothing it is like to be you. From this standpoint, it would seem that entities with high Φ exist in a stronger sense than entities of high mass.

Acknowledgements

The idea for this post came from Brian's essay on Suffering Subroutines, and the basis for my description of IIT came from Tononi's Consciousness as Integrated Information: a Provisional Manifesto. Gina read an earlier draft of this post.

2 comments:

  1. Thanks for this post! It seems like your definition of Phi is just mutual information, right? But the formula in the picture here seems more complicated.

    Mutual information is not obviously the best measure. For instance, one could also use a chi-square test statistic.

    What we consider to be "consciousness" is ultimately up to us, and intuitively, I prefer a description that's "active" in the sense of being an algorithm rather than passive in the sense of referring to aggregate statistics. Still, maybe both could have some relevance in my caring-about function, and it's good that Phi predicts brain regions associated with consciousness, though I presume that many alternate measures could do the same. More thought on Phi here.

  2. It's a nice proxy, but how dependent is it on the graining of your actions? That is, if you think about it some more and there are actually two types of partying you did, and you and Gina chose non-independently between them, does that recalculation make Phi go up? (Answer: yes.)

    Of course, this issue is endemic to all calculations of entropy (and it's the same sort of problem as the length-of-a-coastline 'paradox'). In practice, if you have a consistent graining of reality, then the relative values of Phi might be meaningful...
