Friday, May 25, 2012

Why do Anthropic arguments work?

See "Meaning of Probability in an MWI"

Anthropic arguments set the subjective probability of a type of observation equal to the fraction of such observations within a reference class. This is what I use for "effective probabilities" in the MWI after a split has occurred (the 'Reflection Argument' in the previous post).

There is sometimes confusion and controversy about such arguments, so I will explain in more detail here how and why the argument works.

The anthropic 'probability' of an observation is equal to what the probability would be of obtaining that observation if an observation is randomly chosen.

Does this imply that a random selection is being assumed? Is it implied that there is some non-deterministic process by which observers are randomly placed among the possible observations?

No! I am not assuming any kind of randomness at all. All I am doing is using a general procedure – known as anthropic reasoning – to maximize the amount* of consciousness that correctly guesses the overall situation.

Suppose that, prior to making any observations, X is thought 50% likely to be true. If X is true then one person sees red and nine people see blue. If X is false, then nine people see red and one person sees blue. If you see red, you should think that X is probably false, with 90% confidence.

If people always follow this advice, then in cases like this, 90% of the people will be right. True, 10% will be wrong, but it’s the best we can do. The given confidence level should be used for betting and/or used as a prior probability for taking into account additional evidence using Bayes' theorem.
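The red/blue example above can be checked with a short Bayes' theorem computation. This is just a sketch of the arithmetic, using exact fractions; the numbers all come from the setup stated above:

```python
from fractions import Fraction

# Setup from the example: prior P(X) = 1/2.
# If X is true, 1 of 10 observers sees red; if X is false, 9 of 10 do.
prior_X = Fraction(1, 2)
p_red_given_X = Fraction(1, 10)
p_red_given_not_X = Fraction(9, 10)

# Total probability of an observer seeing red.
p_red = prior_X * p_red_given_X + (1 - prior_X) * p_red_given_not_X

# Bayes' theorem: P(X | red).
p_X_given_red = prior_X * p_red_given_X / p_red

print(p_X_given_red)      # 1/10
print(1 - p_X_given_red)  # 9/10: seeing red, X is 90% likely to be false
```

The posterior of 9/10 for "X is false" is exactly the fraction of red-seers who live in a false-X world, which is the point: adopting this rule makes 90% of the people who apply it come out right.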

That is why the “effective probability” is proportional to the number of people or amount of consciousness; it is not because of some kind of ‘random’ selection process.

The next point is that "number of people" is not always the right thing to use for the anthropic effective probabilities. In fact, it works only as an approximation, and even then only in classical mechanics. The reason is that the amount of consciousness is not always the same for each "person". This is especially true if we consider effective probabilities in quantum mechanics, which are proportional to the squared amplitude of the branch of the wavefunction. In such a case, we must set effective probabilities proportional to "amount of consciousness", which is a generalization of the idea of "number of people". I call this amount "measure of consciousness" (MOC) or "measure".
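A minimal sketch of the quantum case, under the assumption stated above that each branch's measure is proportional to its squared amplitude (the branch amplitudes below are hypothetical, chosen only for illustration):

```python
# Hypothetical amplitudes of two branches of the wavefunction.
amplitudes = [complex(0.6, 0.0), complex(0.0, 0.8)]

# Measure of each branch: proportional to squared amplitude.
measures = [abs(a) ** 2 for a in amplitudes]

# Effective probabilities: measures normalized to sum to 1.
total = sum(measures)
effective_probs = [m / total for m in measures]

print(effective_probs)  # roughly [0.36, 0.64]
```

Note that the branch with the larger amplitude gets more than its "per-branch" share: counting branches as equal "people" (1/2 each) would give the wrong answer, which is why measure rather than head-count is the right generalization.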

Note: In my interpretation of QM - the many computations interpretation - I do assume that the measure is proportional to the number of implementations of the computation, which can be thought of as the number of observers. However, many of the points I make in posts here do not rely on that interpretation, so the more general concept of measure is generally used.

There is no reason not to apply the same kind of reasoning to cases in which time is involved: In such cases, this maximizes the fraction* of consciousness which is associated with correct guesses. In a large enough population (which is certainly the case with the MWI), this is the same as maximizing the amount of consciousness associated with correct guesses at a given global time.

With all of this talk about consciousness, am I assuming any particular hypothesis about what consciousness is? No, I am not.

What about eliminativism - the idea that consciousness as it is commonly understood does not really exist? That's no problem either! I am just using consciousness as a way to talk about the thing that observers do when they observe. Even the most radical eliminativist does not deny that there is something about the brain that is related to observational processes; whatever that is, more people would have more of it.

Rather than "consciousness", perhaps it would be more precise to talk about "observations" or "queries". Remember, effective probability maximizes the fraction of correct answers; this implies that queries are being made. What about the quantum case, in which the "amount of queries" is proportional to squared amplitude? To make sense of this in an eliminativist view, it may be necessary to take a computationalist view, and let the "amount of queries" be the number of implementations of an appropriate computation. On the other hand, for a dualist, the effective probability should be set proportional to the amount of consciousness that sees a given "query".

Given these different philosophies, without implying any position on whether "consciousness" really exists or not, I will continue to use the term "amount of consciousness" to stand for whatever the quantity of interest is that generalizes the notion of "number of people" to give anthropic effective probabilities.

* When considering the consequences of a fixed model of reality, there is no difference between maximizing the number of people who guess correctly and maximizing the fraction of people who guess correctly. However, if different hypotheses which predict different numbers of people are compared, there is a difference. This is closely tied to the philosophical arguments known as the Sleeping Beauty Problem and the Doomsday Argument. I discuss this important topic in the following post.
