Thursday, March 28, 2013

A presentist argument for the Principle of Sufficient Reason, well sort of

  1. (Premise) If presentism is true, reality is present reality.
  2. (Premise) All contingent facts about present reality have explanations in terms of how the world used to be.
  3. So, all contingent facts about present reality have explanations. (By (2))
  4. So, if presentism is true, all contingent facts have explanations. (By (1) and (3))
Premise (2) is the controversial one. But it might be accepted even by someone who thinks the world came into existence causelessly, since present reality would still have an explanation in terms of how the world used to be.

It is, of course, unclear that the presentist can deliver on (4), since it is not clear that the presentist really can explain things in terms of how the world used to be. That depends on how well the grounding and causality objections can be answered by the presentist.

The principle of indifference and paradoxical decomposition

There are many paradoxes for the Principle of Indifference. Here's yet another. The Hausdorff Paradox tells us that (given the Axiom of Choice) we can break up (the surface of) a sphere into four disjoint subsets A, B, C and D, such that (a) D is countable, and (b) A, B, C and B∪C are all rotationally equivalent to one another. This yields a paradox for the Principle of Indifference as follows. Suppose our only information is that some point lies on the surface of a sphere. By classical probability, we should assign probability one to A∪B∪C (and even if that's disputed, because of worries about measure-zero stuff, the argument only needs that we should assign it a positive probability). By Indifference, we should assign equal probability to rotationally equivalent sets. Therefore, since P(A∪B∪C)=1, we must have P(A)=P(B)=P(C)=1/3. But by another application of Indifference, P(B∪C)=P(A). So P(B)=P(C)=1/3 and P(B∪C)=1/3, which is absurd given that B and C are disjoint.
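In symbols (using finite additivity on the disjoint sets B and C), the clash is just:

```latex
P(A \cup B \cup C) = 1, \quad P(A)=P(B)=P(C)=P(B \cup C)
\;\Longrightarrow\;
\tfrac{1}{3} = P(B \cup C) = P(B) + P(C) = \tfrac{2}{3}.
```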

Does this add anything to what we could learn from the other paradoxes for Indifference? I doubt it.

An interesting question in philosophy of probability

I wonder if the following is true:

  1. For any probability measure m on the real numbers, there is a possible world in which there is a process that generates an output quantifiable as a real number and where the chances of outputs precisely correspond to m.
If the answer is positive, then it is possible to have two different physical processes each of which "uniformly", i.e., in a translation-invariant way, "chooses" a number in [0,1], but where the chances arising from the two processes are probabilistically inequivalent. Indeed, there will be a pair of subsets A and B of [0,1] such that there is no way of defining the chance that the first process lands in A though there is a definite chance that the second process does, and vice versa for B.

In the light of all the results like this on extensions of Lebesgue measure, one wonders:

  1. Which extension of Lebesgue measure (or measures derived from Lebesgue measure in standard ways) is the physics of our world governed by?
Or are chances for Lebesgue non-measurable outcomes all undefined?

Wednesday, March 27, 2013

Towards a Thomistic theory of fundamental distributional properties

Recently, various metaphysicians (e.g., Parsons, and Arntzenius and Hawthorne) have tried to give an account of spatially nonuniform properties that would work for extended simples or gunky objects (i.e., ones that have no smallest parts). I think there is an interesting account that has in an important way a Thomistic root, and that's no surprise because Aquinas did not believe that substances had substantial parts, so he faced the problems that people thinking about extended simples face. I will develop a partial account for shape, location and color. The version I will give in moderate detail is Pythagorean, because mathematical objects are involved in physical reality itself. The Pythagorean account is easier to wrap one's mind around. I think it may be possible to use Category Theory to de-Pythagorize the account, but I will only sketch the beginning of that line of thought.

A basic insight Aquinas has is that material objects have a special accident called "dimensive quantity", which accident in turn provides a basis for further accidents, such as color. Moreover, objects normally are located in a place by having their dimensive quantity be located there.

On to the Pythagorean account. Suppose that each extended object O has fundamentally associated with it a manifold G of some appropriate smoothness type (we may in the end want to generalize this, perhaps to a metric space, perhaps a topological space, but let's stick to manifolds for now). This manifold I will call the object's (internal) geometry. The fundamental relation between the object and the manifold that associates the manifold to the object is being geometrized by: the object is geometrized by the manifold. This manifold is a purely mathematical object existing in the Platonic heaven. Nonetheless, which manifold an object is geometrized by significantly affects its nomic interaction with other objects. The shape properties of an object are grounded in the fact that the object is geometrized by such-and-such a manifold.

Next, we need location. There is a fundamental relation between an object O and a function L from the object's geometry G to another mathematical manifold called "spacetime", which relation I will call being located by. The function L describes how the object's geometry is located within spacetime. We can now say that two objects O1 and O2 overlap if and only if there are L1 and L2 such that Oi is located by Li, for i=1,2, and the ranges of the functions L1 and L2 overlap.

Now, let's add some color into the picture. There is an abstract object which is a colorspace. Maybe it's some kind of a three-dimensional manifold. There is a fundamental relation between an object O and a function c from the object's geometry to the colorspace, which we may call being colored by. This function describes the distribution of color over the object's geometry.
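By way of a deliberately crude illustration of the structure just described (finite point sets stand in for the manifolds, and all the class and variable names below are made up for the sketch), the geometry, locating function, coloring function, and overlap test can be modeled like this:

```python
# A crude, finite toy model of the Pythagorean account: finite point sets
# stand in for manifolds; nothing here depends on the details of the points.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ExtendedObject:
    name: str
    geometry: frozenset              # the object's internal geometry G
    location: Optional[dict] = None  # L : G -> spacetime points (being located by)
    color: Optional[dict] = None     # c : G -> colorspace points (being colored by)

def overlap(o1: ExtendedObject, o2: ExtendedObject) -> bool:
    """Two objects overlap iff the ranges of their locating functions intersect."""
    if o1.location is None or o2.location is None:
        return False
    return bool(set(o1.location.values()) & set(o2.location.values()))

# Usage: two objects whose geometries are mapped into a common "spacetime".
a = ExtendedObject("a", frozenset({0, 1}),
                   location={0: (0, 0), 1: (1, 0)},
                   color={0: "red", 1: "red"})
b = ExtendedObject("b", frozenset({0, 1, 2}),
                   location={0: (1, 0), 1: (2, 0), 2: (3, 0)})
print(overlap(a, b))  # True: both are located at the spacetime point (1, 0)
```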

This is the Pythagorean version of the view. Now we should de-Pythagorize it. Suppose there is a fundamental determinable of objects: being geometrized. A maximally specific determinate of being geometrized will be called a geometrization. And then—this is getting sketchier—one makes the geometrizations, and maybe other Platonic things like geometrizations, into a category isomorphic to an appropriate category of manifolds. I don't know what classical ontological category, if any, the arrows correspond to. Maybe some kinds of token relations. If we're substantivalists about spacetime, we can suppose a special object, S, the spacetime. And then there is a fundamental relation of being located by between an object O and a morphism L of the category of geometrizations whose domain is O's geometrization and whose codomain is S's geometrization. It's harder to bring colors into the picture. This is as far as I got. And even if I finish the de-Pythagorization, I will still want to de-Platonize it.

More generally, the de-Pythagorization proceeds by replacing mathematical objects associated with an object with maximally specific determinates of a determinable that, nonetheless, stand in the same structural relations as the mathematical objects did. Category Theory is a promising way to capture that structural sameness, but it might not be the only way.

Tuesday, March 26, 2013

Uncaused universes

This is a very rough argument, and I do not know if it can be given much precision.

Thesis: We should expect that there be either none or infinitely many causeless universes.

For either it's impossible for there to be a causeless universe or it's not. If it's impossible, there will be no causeless universes. But if it's not impossible, then the possible causeless universes form an infinity beyond all cardinality (e.g., maybe for every cardinal, there is a possible world with that many photons). And given just how many possible universes there are—beyond cardinality—if any of them has a real chance of coming into existence, we would expect infinitely many to come into existence.

Absolutely negligible sets

In my last post, I was thinking about arguments for why sets of cardinality less than that of the continuum should be taken to have measure zero. Here's the sort of argument I was thinking about:

  1. The members of a family F of subsets of R^n should be counted as absolutely having measure zero when: (a) no invariant extension of Lebesgue measure assigns a non-zero measure to any of them and (b) every invariant extension of Lebesgue measure has a further invariant extension that assigns zero measure to every member of F.
The sets of cardinality less than that of the continuum satisfy (a) and (b). (The proof of (a) is here. The argument in the appendix of my last post extends to prove (b) given (a).)

But (1) is actually false. Kharazishvili has introduced the notion of "absolutely negligible" sets, which in the Lebesgue case are the sets X such that the one-member family {X} satisfies (a) and (b) (or a stronger condition involving quasi-invariant extensions), and has proved, inter alia, that R^n is a countable union of absolutely negligible subsets (see this paper and references therein; the same point applies to the circle R/Z and to any other uncountable solvable group). Consequently, we should not accept (1).

There is a philosophical lesson here: Even where there is an apparently very reasonable way to make a nonmeasurable set measurable, we can't simultaneously make measurable all the sets that there is a reasonable way of making measurable. And all this stuff on extensions of Lebesgue measure leads to a question that I have not seen addressed in the philosophy literature: Which invariant extension of Borel measure should we take to model cases of things like dart throws and to use as the basis for our physics?

Nonetheless, I think the fact that F satisfies (a) and (b) is evidence that the members of F are reasonably taken to have measure zero, so something of my previous argument survives.

Monday, March 25, 2013

A natural extension of Lebesgue measure, and Brown's argument against the Continuum Hypothesis

The Lebesgue measure on the reals has the following property: it assigns zero measure to every singleton. This is intuitively what we would expect of a uniform measure on the reals by the following line of thought: If I throw a uniformly distributed dart at the interval [0,1], the probability that it will land in any given singleton is infinitely smaller than one, i.e., zero. Here is a generalization of this line of thought: If A is a subset of [0,1] of lower cardinality than [0,1] (i.e., lower cardinality than c, the cardinality of the continuum), then the probability that the dart will land in A is infinitely smaller than one, i.e., zero, since A is intuitively infinitely smaller than [0,1] (the union of countably many disjoint copies of A will still have lower cardinality than [0,1], assuming the Axiom of Choice).

One might at this point ask: Does Lebesgue measure respect this intuition? If the Continuum Hypothesis is true, then of course it does. For then all subsets of lower cardinality than [0,1] are countable, and all countable subsets have null measure, since the singletons have null measure. Without the Continuum Hypothesis this may or may not be true. See the references here. However, even without the Continuum Hypothesis, but given Choice, it can be easily shown (see the previous link) that any Lebesgue measurable subset of lower cardinality than the continuum has null measure. But ZFC is consistent with the claim that all subsets of the reals of lower cardinality than the continuum have null measure as well as with the claim that some are non-measurable and hence do not have null measure.

Nonetheless, given Choice, the Lebesgue measure on the reals extends in a very natural way to a translation-invariant measure m* on a σ-algebra F* that contains all subsets of the reals that have cardinality less than that of the continuum. We can call this the lower-cardinality-nulling extension of the Lebesgue measure.

Because of the intuitions in the first paragraph, it seems to me that the lower-cardinality-nulling extension of Lebesgue measure (or some extension thereof) is the measure we should work with for confirmation-theoretic purposes where normally Lebesgue measure is used. Also, while right now I don't know if there could be a translation-invariant extension of Lebesgue measure that assigns non-zero measure to some subset of cardinality less than c, it is easy to see that if there is such an extension, then it assigns measure 1 to some subset of [0,1] of lower cardinality, and that is surely intuitively unacceptable for the uniform results of dart throws and the like. Hence, every acceptable translation-invariant extension of Lebesgue measure assigns zero measure to every set of cardinality less than c that it measures, and since there is a translation-invariant extension that measures all such sets and assigns them zero, we have a good intuitive argument in favor of working with such a measure.

If this is right, then the Brown two-dart argument against the Continuum Hypothesis (see references and a helpful critique here) is misguided. For we should take the measure governing dart throws to be a lower-cardinality-nulling extension of Lebesgue measure. And once we do that, then the Brown two-dart argument works just as well without assuming the Continuum Hypothesis. Hence whatever problem it identifies is not specific to the Continuum Hypothesis.

Appendix: Construction of the lower-cardinality-nulling extension of the Lebesgue measure. Let F* be the σ-algebra consisting of all the subsets of R that differ from a member of the Lebesgue σ-algebra F by a set whose cardinality is less than c. Suppose A is a member of F*. Then A can be represented as (U∪N1)−N2 where U is in F and N1 and N2 have cardinality less than c. Let m*(A)=m(U). To see that m* is well-defined, suppose that (U∪N1)−N2=(V∪M1)−M2, where V is in F and M1 and M2 have cardinality less than c. Then U and V differ by sets of cardinality less than c, and hence their symmetric difference has null Lebesgue measure, and so m(U)=m(V), and m*(A) is well-defined. Now a countable union of sets A_n of cardinality less than c has cardinality less than c. This follows from the fact that (using the Axiom of Choice) c has uncountable cofinality, so that there is a cardinal K such that |A_n|≤K<c for all n, and hence, again by Choice, the union of the A_n must have cardinality at most K if K is infinite, and at most countable if K is finite. This fact gives closure under countable unions, and since F* is clearly closed under complements, F* is a σ-algebra. To check that m* is a measure, we need only check that it's countably additive. This easily follows once again from the fact that a countable union of sets of cardinality less than c has cardinality less than c. And translation invariance is obvious.
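In symbols, the construction is:

```latex
\mathcal{F}^{*} = \bigl\{\, (U \cup N_1) \setminus N_2 \;:\; U \in \mathcal{F},\ |N_1| < \mathfrak{c},\ |N_2| < \mathfrak{c} \,\bigr\},
\qquad
m^{*}\bigl((U \cup N_1) \setminus N_2\bigr) = m(U),
```

where F is the Lebesgue σ-algebra, m is Lebesgue measure, and c is the cardinality of the continuum.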

The above argument only uses the claim that measurable sets of cardinality less than c are null. This is true for n-dimensional Lebesgue measure. (Proof: Suppose that A in R^n is a bounded measurable set of cardinality less than c. Let A' be a projection of A onto any one axis. Then A' is a measurable one-dimensional set of cardinality less than c, and hence null. For large enough L, A will be a subset of the Cartesian product of A' and an (n−1)-dimensional ball of radius L, and so A will also be null. But if all bounded measurable sets of cardinality less than c are null, then so are the unbounded ones.) And so the lower-cardinality-nulling extension works in n dimensions as well.

Saturday, March 23, 2013

A Copenhagen story about the problem of suffering before human sin

It sure looks like there was a lot of suffering in the animal world prior to the advent of humanity, and hence before any sins of humanity. Yet it would be attractive, both theologically (at least for Christians) and philosophically, if one could say that evil entered the physical world through the free choices of persons. One could invoke the idea of angels who fell before the evolutionary process got started and who screwed things up (that might even be the right story). Or one could invoke backwards causation (Hud Hudson's hypertime story does something like that). Here I want to explore another story. I don't believe the story I will give is true. But the story is compatible with our observations outside of Revelation, does not lead to widespread scepticism, and is motivated in terms of an interpretation of quantum mechanics that has been influential.

Begin with this observation. If the non-epistemic Copenhagen interpretation (NECI) of Quantum Mechanics is literally true, then before there were observers, there was no earth and hence no life on earth. Given indeterministic events in the history of the universe, the world existed in a giant superposition between an earth and a no-earth state. The Milky Way Galaxy may not have even existed then, but instead there was a giant superposition between Milky-Way and no-Milky-Way states. And then an observation collapsed this giant superposition in favor of the sort of Solar System and Milky Way that we observe. There are difficult details to spell out here, which we can talk about in the discussion. But note that the story predicts that we will have astronomical evidence of the Milky Way existing long before there were observers on earth, even though perhaps it didn't--perhaps there was just the giant superposition. For when such a superposition collapses, it leaves evidence as of the remaining branch having been there for a long time earlier.

Now to make this a defense of the idea that suffering in the animal world entered through human sin, I need a few assumptions beyond the above plain NECI story:

  1. the observations that collapse the wavefunction are observations by intelligent embodied observers
  2. quantum states only come to be substrates of conscious states when the wavefunction is strongly concentrated on them (think of a very narrow Gaussian)
  3. prior to there being humans on earth, there were no highly concentrated quantum states of the sort that would be substrates of conscious states
  4. humans were the first embodied intelligent observers of the earth (or of other stuff relevantly entangled with it)
  5. God set up special laws of nature such that if humans were never to make wrong choices, no wavefunctions would ever collapse into the substrates of painful states.
  6. optional but theologically and philosophically attractive: the unsuperposed existence of humans comes from a special divinely-wrought collapse of the wavefunction (this would solve one problem with NECI, namely how the first observation was made, given that on plain NECI there was a superposition of observer and no-observer states before the first observation; it would also help reconcile creation and evolution)

One might even connect the giant superposition with the formless and void state mentioned in the Book of Genesis, though I do not particularly recommend this exegesis and I don't believe the story I am giving is in fact true (and I am mildly inclined to think it false).

Objection 1: The story makes standard paleontological and geological claims literally false. There never were any dinosaurs or trilobites, just a giant superposition with dinosaur- and trilobite-states in one component.

Response: So does the plain NECI story, without any of my supplements such as that it is intelligent observation that collapses the wavefunction. And just like the plain NECI story, my extended story explains why we have the evidence we do.

Objection 2: Like the worst of the young-earth creationist stories, this story involves a massive divine deception.

Response: Not at all. Consider Descartes' attractive idea that what we expect from God is not that we would always get science right, but that we would be capable of scientifically correcting our mistakes. And the discovery of quantum mechanics, with the invention of the NECI interpretation, came within a century of Darwin's work. As soon as we had quantum mechanics with the NECI interpretation, we had good reason to doubt whether prior to the existence of observers there was an earth simpliciter or just an earth-component in a giant superposition.

Objection 3: There are better interpretations of quantum mechanics than NECI.

Response: Weighing the pros and cons of an interpretation of quantum mechanics requires weighing all its costs and benefits. This will include weighing the theological benefits of this interpretation, given the evidence that there is a God.

Variant: If we want, we can reinterpret the paleontological and geological claims about how things were before observers as relativized to a component of the wavefunction, while exempting consciousness from this relativization--only where there are highly concentrated states is there consciousness. The Everett interpretation basically does this relativization for all claims. The present relativization is, I think, less problematic than the Everett one. First, it doesn't branch intelligent agents or conscious states in the way the Everett interpretation does, a branching that generates the severely counterintuitive consequences of Everett's theory. Second, I do not think it has the well-known serious philosophical problems with the interpretation of probability that the Everett interpretation suffers from: the probabilistic transitions all happen with intelligent observation, and are objectively chancy transitions with the probabilities being interpreted according to one's favorite view of objective chances.

Final remarks: Why don't I believe this story? Well, for one, I find myself incredulous at it. Second, we know that either quantum mechanics or relativity theory is false, and I see little reason to assign more credence to quantum mechanics. Third, I do want to preserve the claims of the special sciences, like biology and geology, without implausible relativization. Fourth, I am sceptical of (1), the idea that only intelligent observation collapses the wavefunction.

Psychological theories of personal identity and transworld identity

On psychological theories of personal identity, personal identity is constituted by diachronic psychological relations, such as memory or concern. As it stands, the theory is silent on what constitutes transworld identity: what makes person x in world w1 be the same as person y in world w2. But let us think about what could be said in the vein of psychological theories about transworld identity.

Perhaps we could say that x in w1 is the same as y in w2 provided that x and y have the same earliest psychological states. But now sameness of psychological states is either type-sameness or token-sameness. If it's type-sameness, then we get the absurd conclusion that had your earliest psychological states been just like mine, you would have been me. Moreover, it is surely possible to have a world that contains two people who have the same earliest types of psychological states. But those two people aren't identical.

On the other hand, if we are talking of token-sameness, then we seem to get circularity, since the token-sameness of mental states presupposes the identity of the bearers. But there is a way out of that difficulty for naturalists. The naturalist can say that the mental states are constituted by some underlying physical states of a brain or organism. And she can then say that token-sameness of mental states is defined in terms of the token-sameness of the underlying physical states. This leads to the not implausible conclusion that you couldn't have started your existence with a different brain or organism.

But I think any stories in terms of initial psychological states face the serious difficulty that it is surely possible for me to have been raised completely differently and to have had different initial psychological states. This is obvious if the initial psychological states that count are states of me after the development of the sorts of cognitive functions that many (but not me) take to be definitive of personhood: for such functions develop after about one year of age, and surely I could have had a different life at that point.

In fact this line of thought suggests that no psychological-type relation is necessary for transworld identity. But if no psychological-type relation is necessary for transworld identity, why think it's necessary for intraworld identity?

Friday, March 22, 2013

One can't hold to both Humeanism about laws and naturalism about mind

The following is obviously true:

  1. Any world that contains an exact duplicate of the solar system, over all its past, is a world that contains human-type mental states.
If Humeanism about laws is true, then what laws, if any, govern the events in the solar system depends on what happens outside the solar system. But if naturalism about mind is true, then whether there are mental states depends on the laws of nature and the local arrangement of matter. It doesn't just depend on the local arrangement of matter. If one had matter that is locally arranged just as it is in the solar system, but the laws of nature were utterly different, so that the functional facts about that matter would be utterly different, we wouldn't have human-type mental states.

But now we see that Humeanism about laws plus naturalism about mind requires us to deny (1). For we could imagine a world with an exact duplicate of the solar system but where the behavior of stuff outside the solar system is so very different from how it is in our world that the familiar sorts of laws that are crucial to our mental functioning being as it is do not hold--say, the Pauli exclusion principle is false--though by chance the local behavior in the solar system is as if those laws held, so that in our solar system fermions don't share a state. But without such laws we don't have the kinds of functional interconnections that are involved in human-type mental states.

I suppose the Humean might just deny (1). But now we can make it more ridiculous. On Humean views, what laws there are depends on the future course of the universe. Imagine that we have a universe which is just like ours up to tomorrow, but with an infinite future a day later, where everything goes topsy-turvy. None of the regularities that held up to tomorrow are global regularities, and a fortiori none of them are laws. Therefore, in that world, too, there haven't been any human-type mental states. And that's really absurd. For it means that whether there have been human-type mental states depends on how things will be from the day after tomorrow.

Thursday, March 21, 2013

Redemption in Christ from individual and cosmic sin

There is a danger in seeing Christ's work of redemption as solely focused on one's individual sins. But at the same time the lived experience of Christians is precisely an experience of redemption from individual sins that block union with God and neighbor: "How can I ever be friends with Him, or with him, or with her, or with them, after I did that to Him, or to him, or to her, or to them?" In the context of one's individual repentance, a focus on the sin of the world and the deeply rooted social dimensions of sin may be a distraction or even an excuse. "It's not my sin, but our sin." And it is not far from our sin to nobody's sin. Adam shifted the blame for his sin onto Eve, and Eve onto the serpent. It would have been no better if they had shifted it onto the world.

None of this denies that there are structures of sin that are of cosmic importance, and that the means by which the individual sinner is redeemed is through-and-through ecclesial. But we must also not overestimate the importance of cosmic sin. For there is nothing worse in the world than mortal sins, and it is only individuals who commit those, and thereby separate themselves from God and one another. Moreover, to the true lover, the beloved is a cosmos. And God loves each of us.

Wednesday, March 20, 2013

Fundamentality and ungroundedness

I haven't been following the grounding literature, so this may be old hat, in which case I will be grateful for references.

The following seems pretty plausible:

  1. p is fundamental if and only if p is ungrounded.
But I think (1) may be false. I will put the argument in tensed fashion, but it could also be done a bit more awkwardly in a four-dimensional setting.

Let's suppose that <I ought to respect other persons> is a fundamental moral truth. Call this truth R. But now I validly promise to respect other persons. Then R comes to be grounded in <I ought to keep my promises and I promised to respect other persons>. If (1) is true, then R continues to be true but ceases to be fundamental. That doesn't sound right. It seems to me that if R is ever a fundamental moral truth, then it is always a fundamental moral truth. After I have promised to respect other persons, R has gained a ground but has lost nothing of its fundamentality.

Maybe I can motivate my intuition a little more. It seems that R has a relevantly different status from the status had by S, the proposition <I ought to come to your house for dinner every night>, after I promise you to come to your house for dinner every night. Each of R and S is grounded by a proposition about promises, but intuitively the fundamentality-and-grounding statuses of R and S are different. A sign (but only a sign--we want to avoid the conditional fallacy) of the difference is that R would still be true were the proposition about promises false. Another sign of the difference is that <I ought to respect you> is overdeterminingly grounded in <I ought to respect all persons> and <I promised to respect all persons and I ought to keep my promises>, while it is false that <I ought to come for dinner tomorrow night> is overdeterminingly grounded in <I ought to come for dinner every night> and <I promised to come for dinner every night and I ought to keep my promises>. The latter is not a case of overdetermination.

The above example is controversial, and I can't think of any noncontroversial ones. But it seems plausible that we should be open to phenomena like the above. Such prima facie possibilities suggest to me that ungroundedness is a negative property, while fundamentality is something positive. Normally, fundamental truths are also ungrounded. But they don't lose their fundamentality if in some world they happen to be grounded as well.

A somewhat tempting way to keep the above intuition while maintaining the idea that fundamentality is defined in terms of grounding is to drop the irreflexivity of grounding and say that:

  1. p is fundamental if and only if p grounds p.
Then we could say that R is overdeterminingly grounded by a proposition about promises as well as by R itself, while S is only grounded by a proposition about promises and not by S. And in ordinary language we do sometimes use expressions like "p because p" to express some kind of fundamentality of p. I am not that happy with this solution, but can't think of another one that keeps the idea that fundamentality is defined in terms of grounding. Of course, one could take fundamentality to be fundamental.

Laurie Paul on deciding whether to have children

A colleague alerted me that Laurie Paul has a piece on deciding whether to have children (in a position where one does not have any yet). Paul addresses the model on which you rationally decide whether to have a child by reflecting on "what it will be like for you to have a child of your very own" and argues that this model cannot be used. The neat idea is that having a child is a transformative experience, and just as the person who has never seen color cannot have any idea what it's like to see red, so too one cannot have any idea what it will be like to live after this transformative experience until one has undergone it. Therefore, she concludes, standard decision theory cannot be used to decide whether to have a child.

Now, I agree that one shouldn't decide on whether to have a child by reflecting on what that would be like for you. To decide to have a child on the basis of what it will be like for one is to treat the child's very existence as a means to one's ends. This is morally objectionable in the same way that it is morally objectionable to decide to rescue a drowning person on the basis of what it will be like for one to be a rescuer (though of course that certainly beats deciding not to rescue her).

But Paul's transformative-experience argument seems to me to fail in at least two places. First, it is false that one cannot make rational decisions on the basis of what it will be like for one after one has had a transformative experience. Paul herself gives the example of posttraumatic stress as a transformative experience. I have no idea what it would be like to have undergone that, but I certainly can know that it would be terrible to live with posttraumatic stress. I can decide to avoid situations generating posttraumatic stress in a perfectly rational way on the basis of what it would be like to live with posttraumatic stress--namely, on the basis of the fact that it will be nasty. I don't have to know in what way it would be nasty to know that it would be nasty. There are many very nasty things that are transformative, and one can rationally avoid them simply on the basis of common-sense knowledge (typically based on testimony) of their nastiness.

I think Paul will resist this line of thought on the following grounds. The decision whether to avoid posttraumatic stress or not is a no-brainer given what we know about it. Just about everybody who undergoes it agrees it's very nasty, I assume (I haven't checked opinion polls here). But parenthood is much more complex. Typically, it has both nasty and nice components. And one doesn't know how the nasty and the nice will balance out until one has undergone the transformative experience. But I think that once we've agreed that there is no in-principle difficulty in making decisions on the basis of how things will be after a transformative experience, then it can just be a matter of gathering the best information we have on the balance between the nasty and the nice for people of different sorts, and seeing what sort of a person we are.

Second, even if one could not make a decision on the basis of what-it-would-be-like considerations, decision theory could still be used. One could use non-egocentric decision theory, like utilitarian decision theory. But even within egocentric decision theory, one could make a decision on the basis of non-experiential values, like the objective value of being the intentional cause of the great goods of life and upbringing.

My own take on the reasons for having children is rather different however. There is some discussion here.

Monday, March 18, 2013

Presentism and abstracta: Two arguments

Here is an argument against presentism:

  1. If presentism is true, then to exist is to presently exist.
  2. Abstracta exist but do not presently exist.
  3. So, presentism is false.
And here is an argument for presentism:
  4. The number two existed yesterday and the day before yesterday.
  5. Anything that existed yesterday and the day before yesterday persisted.
  6. If growing block or eternalism is true, then persisting objects either have temporal parts or have locational properties.
  7. The number two does not have temporal parts.
  8. The number two does not have locational properties.
  9. If presentism is not true, growing block or eternalism is true.
  10. So, presentism is true.

My own take on the second argument is to distinguish between two senses of "x exists at t". The first sense is that x has tenseless existence-at-t. This we might call the narrow sense. But there is a broader sense of "x exists at t", which is that either x exists timelessly or x has existence-at-t. Ordinary language tends to use the second sense. If we take the broader sense in (4) and (5), I accept (4) (though with a divine conceptualist reduction) and deny (5). If we take the narrower sense, I deny (4) but accept (5).

One Body book: 30% off

I got an email from Notre Dame University Press that my book One Body: An Essay in Christian Sexual Ethics is available at 30% off with free shipping from the Press's order page, which brings the price down to $31.50 from the list price of $45 (I don't set the prices!). You will need to enter the sale promo code NDEME13 at the shopping cart stage. Other books in their Ethics and Culture and their Medical Ethics series are also on sale with the same code.

I don't know if the discount code applies to their PDF version or only to their paper version.

Saturday, March 16, 2013

A fun game-theoretic paradox

You observe Fred and Sally playing a game repeatedly. The game works as follows. Fred spins a spinner counterclockwise on a circular board that is so calibrated that its final position is uniformly distributed. The board has some angles labeled in degrees, counterclockwise, but the payoff for Sally depends solely on the exact final position of the spinner (so if it is between markings, the exact position between markings matters). You don't know how the payoff depends on the final position of the spinner. But now you notice an oddity. Each time, right after the spinner stops, Sally is not allowed to see where it stopped, but Fred asks her: "Do you want me to rotate the spinner by an extra square-root-of-two degrees counterclockwise?" And Sally says: "Of course."

You ask Sally: "Are you superstitious or trying to make more work for Fred or have you peeked at where the spinner landed?" And Sally says: "Not at all. The payoff structure for the game gives me a good reason to ask for that extra counterclockwise rotation if the chance is offered."

And now you're really puzzled. You know Sally is really smart, and yet you wonder how it could be that an extra fixed rotation on top of what is already a uniformly distributed rotation of the spinner could make any difference.

It turns out that it could. Suppose that the game works as follows. Let S be the set of positions defined by the angles r, 2r, 3r, ... in degrees (with wrapping around, of course, so that 361 degrees defines the same angle as 1 degree), where r is the square root of two. If the pointer ends in S, Sally gets an infinite payoff (the idea of using an infinite payoff to make zero-probability outcomes relevant is Alan Hajek's)—maybe she gets to go to heaven. Otherwise, she gets nothing. In this setup, Sally has a good reason to ask for the extra rotation of r degrees. For if after Fred's initial spin, the pointer is already in S, it will still be in S after that extra rotation (nr+r=(n+1)r). But if Fred's initial spin put the pointer at zero, then the extra rotation by r will put the pointer at r, which is in S. Thus, if Fred's initial spin is not at zero, Sally loses nothing by asking for the extra rotation, and if it is at zero, Sally gains an infinite payoff by asking. Since it could be at zero (though the probability of that is, well, zero), we have domination. If Fred allowed it, Sally would have even better reason to ask for a rotation of 2r degrees, and an even better reason to ask for a rotation of 10^100 r degrees.
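Here is a minimal sketch of the bookkeeping behind the domination argument (just an illustration; nothing in the argument depends on it). Since r = √2 is irrational, each position of the form kr degrees (mod 360) with k a nonnegative integer comes from exactly one such k, so such positions can be tracked by k alone: the position is in S exactly when k ≥ 1, and the extra rotation is just k → k+1.

```python
# Toy bookkeeping for the spinner game: positions of the form k*r (mod 360),
# r = sqrt(2), are tracked by the integer k.  Since r is irrational, distinct
# nonnegative k give distinct positions, and a position of this form lies in
# S = {r, 2r, 3r, ...} (mod 360) exactly when k >= 1.

def in_S(k: int) -> bool:
    """Is the position k*r degrees (mod 360) in the payoff set S?"""
    return k >= 1

def rotate_by_r(k: int) -> int:
    """Apply the extra counterclockwise rotation by r degrees."""
    return k + 1

# Weak domination: the extra rotation never takes a position out of S,
# and it takes the zero mark (k = 0, not in S) into S.
assert not in_S(0) and in_S(rotate_by_r(0))
assert all(in_S(rotate_by_r(k)) for k in range(1, 10**5))
print("Asking for the extra rotation never hurts and helps at the zero mark.")
```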

Of course, Sally is all but certain not to get anything. But infinite utilities call for desperate measures, and it costs Sally nothing to agree. (One might worry that asking delays the next game; that can be taken care of by ensuring that successive games always start at fixed times.)

Friday, March 15, 2013

Infinite sums are not sums

I think that when working on countable additivity, supertasks, infinite utility sequences and the like, it's really important to remember that infinite sums are not sums. Infinite summation is a limiting procedure that goes from an infinite sequence of numbers to a number, satisfying some of the properties of summation. There is nothing absurd about a process where in the first half-second you walk half a meter, in the next quarter-second you walk a quarter of a meter, in the next eighth-second you walk an eighth of a meter--and at the end of the whole second you're a mile away. This is all basically a point from Benacerraf or Thomson.
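For definiteness, the limiting procedure in question is the usual one, defined only when the limit exists:

```latex
\sum_{n=1}^{\infty} a_n \;:=\; \lim_{N \to \infty} \sum_{n=1}^{N} a_n .
```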

What this means is that in philosophical contexts where summing up an infinite sequence comes up, one needs to justify the idea that the right way to sum up the sequence is to use this limiting procedure. Sometimes, as in my example of walking, while it's not absurd that you would end up a mile away, you can assume a principle of continuity that gives a more natural answer.

The most risky cases are where the sum is only conditionally convergent, as in Nover and Hajek's Pasadena Game, where with probability 2^(-n) you win -(-2)^n/n. If what is being "added up" are things where the ordering does not matter--e.g., utilities--then the idea that you're "adding" is dubious. (This does not affect Nover and Hajek's use of the game, and indeed it's basically their point.)
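To see concretely how much the ordering matters: the expected-value terms of the Pasadena Game are 2^(-n)·(-(-2)^n/n) = (-1)^(n+1)/n, i.e., the alternating harmonic series, and summing the same terms in a different order yields a different limit. A quick numerical sketch (just an illustration):

```python
import math

# The Pasadena Game's expected-value terms are (-1)**(n+1)/n, the alternating
# harmonic series.  Its "sum" depends on the order of summation, which is one
# way in which the limiting procedure is not ordinary addition.

def natural_order(N):
    # 1 - 1/2 + 1/3 - ... in the order given by the game's own indexing
    return sum((-1) ** (n + 1) / n for n in range(1, N + 1))

def rearranged(blocks):
    # the same terms, taken two positive terms then one negative term at a time
    total, p, q = 0.0, 0, 0
    for _ in range(blocks):
        for _ in range(2):
            p += 1
            total += 1 / (2 * p - 1)   # next unused positive term
        q += 1
        total -= 1 / (2 * q)           # next unused negative term
    return total

print(natural_order(10**6))            # ~ 0.6931, i.e., ln 2
print(rearranged(10**6))               # ~ 1.0397, i.e., (3/2) ln 2
print(math.log(2), 1.5 * math.log(2))  # reference values
```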

Tuesday, March 12, 2013

Knowing physics

Stephen Hawking knows physics. But what does that mean?

Is it a case of objectual knowledge, akin to Churchill knowing Chamberlain or my knowing my left knee? It seems not. For Hawking could know physics without physics as such being an object of his knowledge. For there are two senses of the word "physics".

The first referent of "physics" is a particular discipline. But a discipline is an object of knowledge for a meta-discipline. Thus, physics is an object of knowledge for the sociology or philosophy of physics and while a physicist, of course, can double as a sociologist or philosopher of physics, she need not. We could imagine a superb physicist but rotten philosopher who was simply oblivious to the idea of physics as a discipline, much as in the case of the Frenchman who talked prose all his life without knowing it.

The other sense of the word "physics" is the sense in which one talks of the "physics of the world", i.e., the true physics. But that Hawking knows physics could be true even if his physical theories turned out to be false, so when we say that Hawking knows physics, we don't mean that he knows the physics of the world. Copernicus knew astronomy, though he was fundamentally wrong about the "astronomy of the world"--the planets don't move in circles and there is nothing non-relatively at rest.

Perhaps "physics" is a mass noun here for the propositions of physics. So, when we say Hawking knows physics, we mean he knows enough of physics, i.e., enough of the propositions of physics. But again that can't be right. For it could be that the propositions of physics that Hawking accepts are false and yet Hawking knows physics.

So it seems that when we say that Hawking knows physics we are attributing neither objectual nor propositional knowledge to him. Maybe when we say someone knows physics, we are attributing a mastery of physics. Thus, knowing physics would be like knowing archery or cookery.

Monday, March 11, 2013

Some conjectures on intention, success and trying

Here are some conjectures:

  1. x As out of a proximate intention to A if and only if x succeeds at trying to A.
  2. x proximately intends to A if and only if x tries to A.
  3. x proximately intends that s if and only if x proximately intends to bring it about that s.
  4. x distantly intends to A if and only if x tries to bring it about that she* [quasi-indicator] As
  5. x distantly intends that s if and only if x distantly intends to bring it about that s.
Proximate intentions are the intentions that normally directly result in action, as distinguished from distant intentions which are plans for future action that still require a proximate intention before the action. I am rather less confident of the theses on distant intention than those on proximate intention. But even the theses on proximate intention only have something like the following epistemic status: "They sound right and I can't think of a counterexample."

If (2) is right, then the intention condition in the Principle of Double Effect can be reformulated as saying that the agent isn't trying to bring about an evil. And indeed the following sounds exactly right to me:

  6. You are never permitted to try to bring about an evil.
Of course, there are some difficult de re / de dicto issues in regard to (6).

Saturday, March 9, 2013

An argument from the hiddenness of God against naturalistic evolution

As a number of authors have pointed out, there is good reason to think that evolutionary processes would instill in us a belief in a judge who can see what we do in secret, as a way of motivating our cooperation with one another. But if so, then why isn't belief in such a judge more universal than it is? Why is it possible for us to rid ourselves of that belief?

I don't really mean this as much of a serious argument against naturalistic evolution. But I do want to point out that the problem of divine hiddenness might not be a problem only for theists.

Friday, March 8, 2013

A way to classify and discover virtues

Here is a way to classify virtues, which also leads to a way of discovering virtues. Start with what I will call "general virtues" ("structural virtues" is perhaps better, but the term is already taken). These are excellences that in their specification are neutral between particular human goods like health, friendship, reproduction, understanding, etc. Some examples of general virtues include:

  • Courage: Excellence in risking real loss in respect of a good to self for the sake of a good to self or another.
  • Generosity: Excellence in sacrificing a good to self for the sake of a good to another.
  • Perseverance: Excellence in accepting a temporally extended loss of a good to self for the sake of a good to self or another.
  • Wisdom: Excellence in choosing between incommensurable goods.
  • Moderation: Avoiding excess in respect of goods.
You may well dispute my particular characterizations, and you will be right to do so, but it is the general form that I am most interested in here rather than details. Each of these general virtues talks of one's attitudes to goods, while being neutral on which particular goods these are.

However, it is well known at the folk level, and it fits with psychological research on the domain-specificity of human excellences, that one can have one of these general virtues in respect of only some goods or pairs of goods, but not in respect of others.

Thus, we have virtues that are specifications of the general virtues. The specifications can be of two sorts: they can be further structural specifications or they can be substantive specifications. For instance, we can structurally specify courage into what one might call:

  • Self-centered courage: Excellence in risking real loss in respect of a good to self for the sake of a good to self.
  • Other-centered courage: Excellence in risking real loss in respect of a good to self for the sake of a good to another.
One might even have more specific structural specifications specifying one's relationship to others, the quantity of others, etc. We can also structurally specify perseverance into things like:
  • Persistence: Excellence in accepting a lengthy temporally extended loss of a minor good to self.
  • Heroic perseverance: Excellence in accepting a temporally extended loss of a major good to self.

But I want to focus instead on substantive specifications. The general and structurally specified virtues are neutral between the kinds of human goods. Substantive specifications are special cases where only particular kinds of human goods are in play. Some examples:

  • Physical courage: Excellence in risking real loss of health to self for the sake of a good to self or other.
  • Social courage: Excellence in risking real loss of social capital to self for the sake of a good to self or other.
  • Physical-social courage: Excellence in risking real loss of health to self for the sake of social capital for self.
  • Chastity, a substantive specification of moderation: Avoiding excess in respect of sexual goods.
  • Political wisdom: Excellence in choosing between incommensurable communal goods.
Notice that such specifications nest. Physical courage still has a structural element: it does not specify which goods to self or other justify the risk. Physical-social courage, which while admirable is not as admirable as some other substantive specifications of courage, specifies the goods for the sake of which the risk is undertaken.

And it could be that there are even narrower virtues, with other kinds of specifications, including contextual ones, like military physical-social courage.

We can thus classify virtues by first finding a general form that is neutral between kinds of goods, with more or less of a structural specification, and then adding a substantive specification in terms of the kinds of goods involved; sometimes, as in the case of courage, there is more than one place in the structure where kinds of goods need to be inserted.

This leads to a heuristic that could allow for the discovery of new virtues:

  • Every virtue can be obtained in this way.
This heuristic may have exceptions, though I can't think of any right now. Given the heuristic, a virtue that makes specific reference to a type of good will be a substantive specification of a virtue that does not. And this means that when we have a substantive virtue, like chastity, we should be able to discover other substantive virtues by finding the underlying structure, and substantively filling it out in other ways.

Suppose that, further, one agrees that intellectual virtues are virtues, but ones that concern an epistemic good. Then this means that one will be able to discover new intellectual virtues simply by specifying structural virtues with epistemic goods, and new non-intellectual or not-specifically-intellectual virtues by finding the structures underlying the intellectual virtues.

Two examples. First, from the non-intellectual (or not-specifically-intellectual) direction to the specifically intellectual direction:

  • Intellectual-intellectual courage: Excellence in risking real loss of epistemic goods to self for the sake of epistemic goods to self.
This comes into play when one investigates a matter where one already has well-established epistemic goods, accepting a risk that misleading evidence might cause the loss of these goods. Second, the other way. Start with:
  • Intellectual open-mindedness: Excellence in risking loss of apparent epistemic goods for the sake of real epistemic goods.
And now we get the very interesting virtue:
  • General open-mindedness: Excellence in risking loss of apparent goods for the sake of real goods.
For instance, by relying on an apparent friend, we risk finding out that the friendship is only apparent. But it is a risk worth taking on.

Wednesday, March 6, 2013

A variant on my argument against probability in multiverse scenarios

This is a variant of the argument here. I am not sure which version I like more. This version is lacking one distracting feature of the original, but I kind of like the original more.

You're suddenly informed by an angel that a countably infinite number of people have just each rolled a fair and independent indeterministic die. In the case of each of these people, you should surely assign probability 1/6 that that person rolled six. The angel then adds that infinitely many people rolled six and infinitely many didn't. This doesn't surprise you—after all, that was just what you expected from the Law of Large Numbers.

The angel then adds that he divided up all the die rollers into pairs, one member of each pair having rolled six and the other not. This doesn't seem to be a big deal. After all, you knew that they could be thus divided up as soon as you heard that infinitely many rolled six and infinitely many didn't. The mere fact that they were so divided seems irrelevant.

Finally, the angel transports you to meet each pair of paired die rollers, pair by pair. In each pair, you now know for certain that one person got six and the other didn't. Let's say that at some point you meet Jennifer and Patricia. What probabilities do you assign to each having rolled six? You can't stick to your old assignment of 1/6 to each roller. For if you did that, you'd be violating the probability calculus, since you know for certain that exactly one of the two got a six, so you had better have P(Jennifer got six)+P(Patricia got six)=1. But you can't go for any asymmetric assignment either, since your evidence regarding Jennifer and Patricia is exactly symmetric. That leaves you with two options: you must refuse to assign any probability to Jennifer's getting six and to Patricia's getting six, or you must assign probability 1/2 in both cases. And since this reasoning applies to everybody, you'll be assigning 1/2 to everybody or no-probability to everybody.
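In symbols, with j = <Jennifer rolled six> and p = <Patricia rolled six>:

```latex
P(j) + P(p) = 1, \qquad P(j) = P(p)
\;\Longrightarrow\;
P(j) = P(p) = \tfrac{1}{2} \quad \text{(if defined at all).}
```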

Assigning 1/2 to everybody is untenable, because you could then re-run the scenario with a different way of partitioning (say, into groups of three, two of whom rolled six, and one who didn't).

So it seems that you just need to refuse to assign probabilities. But from which point on? As soon as you learned that there were infinitely many people rolling independent fair dice, you already knew that some partitioning like the above was possible. Intuitively, you didn't learn anything important at any subsequent step. (You did genuinely learn something when you learned there were infinitely many sixes and infinitely many non-sixes, but since that information had probability 1 before you learned it, it shouldn't have affected your probabilities significantly.)

One way out is to deny the possibility of engaging in probabilistic reasoning in a scenario where there are infinitely many independent copies of an experiment. This way out insists that from the beginning, as soon as we knew about the infinitary aspects of the experiment, we should have refused to assign probabilities. (But I worry: What if the experiments are sequential? Shouldn't it be possible to engage in probabilistic reasoning then? And yet one can make a similar story work in sequential contexts.)

Maybe, though, one can switch from assignments of 1/6 to a no-probability at the last step. When I meet Patricia and Jennifer, I meet a highly biased sample of a pair of die rollers: after all, in an unbiased sample, I would have no guarantee that exactly one of the two rolled six. The bias takes a form that cannot be probabilistically handled by me: I have no probability distribution on ways of pairing individuals that assigns a probability to Patricia and Jennifer being paired together. And where there is bias that cannot be probabilistically handled, I must suspend judgment. This is not a very attractive way out. But maybe it's still the right one? (Notice, though, that if you are told ahead of time that you will be meeting people in pairs like this, then you will violate a plausible generalization of van Fraassen's reflection principle if you initially assign 1/6 and then switch to no-probability.)

Tuesday, March 5, 2013

Exclusion of reasons and purity of heart

Patrick is an attractive man and a good philosopher. His colleague Jenny, who is married to someone else, now has two reasons to talk to Patrick. Here we don't, I think, want to say that the problematic reason is overridden or defeated. For if Jenny's practical reasoning in favor of talking to Patrick includes "Patrick is attractive, but I'm married to someone else," her practical reasoning takes Patrick's attractiveness to count in favor of talking with him, and that's unfaithful--there is at least a moral imperfection there, even if not a sin. (One can act or reason viciously without actually sinning.) Rather, the reason should be excluded in the Raz sense. Jenny has a second-order reason not to let Patrick's attractiveness count even pro tanto or defeasibly in favor of spending time with him, just as when an army commander tells one to take yonder hill, one has a second-order reason not to let personal inconvenience count even pro tanto or defeasibly in favor of staying put.

In practice, it's hard to really exclude reasons, we often do not know all our motives, and self-deceit is easy, which is why we have practices of self-recusal when one has strong excluded reasons. In some cases it is important enough not to risk acting on excluded reasons that one removes oneself from any position in which one would be making a decision to which the excluded reasons are relevant.

Another practice besides recusal is working on one's psychology so that an excluded reason in favor of an action comes to count against the action. Thus, in the above example, that Patrick is attractive would start counting, for Jenny, as a defeasible reason against talking with him. When she does end up talking to him, it will perhaps no longer be partly because of his attractiveness, but despite it. Such a practice of overcompensation is not abstractly ideal--a morally perfect being would have no need for it. Moreover, in some cases the equivalent of recusal is more appropriate: a judge should not overcompensate for the fact that she'll benefit monetarily from a verdict by biasing herself against that verdict, but should recuse herself. But in some cases, overcompensation seems an appropriate solution.

However, human capabilities for self-deceit are enormous, and one cannot count on overcompensation to do the trick by itself. For it had better not be the case that Jenny is in fact consciously or unconsciously weighing three reasons bearing on talking to Patrick: the philosophical reason for talking with him, the attractiveness reason in favor of talking with him, and the attractiveness reason against talking with him, even if the third reason overcompensates for the second. For if she is acting on these three reasons, she is taking Patrick's attractiveness to count in favor of spending time with him. But perhaps a habit of overcompensation can give rise to genuine exclusion of the excluded reasons.

Kantian reflections on making tools less useful

It has often struck me as perverse to put effort into making the products one sells less flexible and useful. There is a good Kantian reason against such efforts when the goals one is hampering are not immoral: by making products less useful, one hampers the customers' autonomous pursuit of their goals and thus puts effort into treating the customers less as ends.

This is not, I think, an area for an exceptionless principle. But there is, I think, a strong moral presumption against putting effort into making products less flexible and less useful to customers. The principle is grounded in the need to avoid unnecessary restrictions on pursuits of permissible ends. Here are some examples of practices that violate this presumption:

  • Locking down operating systems on tools such as phones, tablets and cameras in such a way as to make it more difficult, and in some cases illegal, for users to add new functionality.
  • Including legal terms in end-user license agreements that have a blanket prohibition on modification of the software.
  • Removing customization options available in earlier versions of software.
Conversely, there is a moral presumption in favor of making products more flexible and more useful to customers, especially when this can easily be done. For instance, if a software developer can with little effort add an option that allows a piece of software to be customized by the user in some respect, adding that option is a way of displaying respect for the user's autonomy to choose ways of using the software that do not fit the developer's own ideas about how the software is best used.

Of course, these are all only presumptive moral principles and can be overridden. Nonetheless, the reasons to override these principles need to have significant moral weight. And the reasons need to be particularly weighty when the reasons themselves are in tension with respect for the customer, as for instance if one locks down the firmware of a device in order to protect customers from themselves. Such locking down is much easier to justify in a device like a car or a pacemaker, where human lives are at stake, than in a device like a camera, where the worst that might happen to a customer is destruction of the device. Phones are somewhere in between: lives can depend on them (and not just in really rare freak cases), but typically do not.

And of course there need be nothing wrong with a pricing structure on which less locked-down devices are more expensive, as long as all the prices are fair (I am inclined to accept something like the medieval fair-price doctrine, but I have no idea how to formulate it).

Monday, March 4, 2013

What to do with evidence that can't be handled probabilistically?

Suppose I get a piece of evidence apparently relevant to a proposition p that can't be handled using the standard Bayesian probabilistic apparatus. For instance, maybe I am trying to figure out how close a dart landed to the center of the target, and I learn that, mirabile dictu, the dart's landing point had rational numbers as coordinates. That's a case where the likelihoods of the evidence on all the relevant hypotheses are zero and there is no good limiting procedure to get around that. Or maybe I am trying to figure out how old Jones is, and I am told that Jones' age in years happens to be equal to a number that an angel yesterday picked out from among all natural numbers by a procedure that has no biases in favor of any numbers. That's a case where there are no meaningful likelihoods at all, since countable fair lotteries cannot be handled by the probability calculus. Or perhaps I am wondering whether some large number is prime, and I am told that infinitely many people in the multiverse think it is and infinitely many think it's not.
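To see why the likelihoods vanish in the dart case, here is a sketch, assuming only that on each relevant hypothesis the landing point's distribution has no atoms (no single point gets positive probability):

\[
P(\text{both coordinates rational} \mid H) = \sum_{q \in \mathbb{Q}^2} P(\{q\} \mid H) = 0,
\]

since \(\mathbb{Q}^2\) is countable and each singleton has probability zero. So every likelihood is zero, every likelihood ratio is the indeterminate 0/0, and Bayesian conditionalization gets no purchase.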

In some cases, the evidence should infect my credences in such a way that I no longer have a probability assigned to p. In other cases, I should just ignore the evidence. How to judge what should be done when? My intuitions say that I can just ignore what the infinitely many people in the multiverse think. But I don't know what to make of the other two pieces of evidence. I have a weak inclination to ignore the rational-number-coordinates fact. But the fact about Jones' age happening to match up with the infinite lottery result, that I don't think should be ignored. Maybe I should no longer have credences about Jones' age?

Can anything in general be said? Maybe, but I don't know how to do it.

Sunday, March 3, 2013

A visit to the stone age


I had some pleasant visits to the stone age this week.

A couple of weeks ago, I made an atlatl out of an approximately 30 inch piece of maple from a recalled crib, with a spur made from the point of a damaged arrow and a handle made from drawer liner. I then bought some bamboo stakes at Home Depot, straightened one over an alcohol flame, fletched it with duct tape, and sharpened the end. Not exactly stone age materials, but it was more convenient this way.

It was only this week that I had a chance to try out the equipment at the range, both by myself and with my son. I can throw the 5.5 foot dart about 50 yards with the atlatl. There is something rather magical about this long dart snaking through the air with only a little leverage. (I also tried a much shorter dart, under 4 feet, but it flew terribly. Sometimes it would flip over backwards.) My accuracy leaves a lot to be desired, but at least I can occasionally hit the target boss at ten yards. The rest is a matter of practice, I suppose.



Two kinds of pantheism

There are two kinds of pantheism. One might call them: reductive pantheism and world-enhancing pantheism.

Reductive pantheism says that the world is pretty much as it seems to us scientifically (though it might opt for a particular scientific theory, such as a multiverse one), and that God is nothing but this world. In so doing, the reductive pantheist will be trying to find a place for the applicability of the divine attributes to this world.

World-enhancing pantheism, however, says that there is more to the world than meets the eye. There is something numinous pervading us, our ecosystem, our solar system, our galaxy, our universe and all reality, with this mysterious world being a living organism that is God. World-enhancing pantheism paints a picture of a divinized world.

World-enhancing pantheism is a genuine religious view, one that leads to distinctive (and idolatrous!) practices of worshipful reverence for the world around us. Reductive pantheism, on the other hand, is a philosophers' abstraction.

It is an interesting question which version of pantheism is Spinoza's. His influence on the romantics is surely due to their taking him to be a world-enhancing pantheist, and he certainly sometimes sounds like one. But it is not clear to me that he is one. Though it may be that Spinoza has managed to do both: we might say that under the attribute of extension, we have a reductive pantheism, but the availability of the attribute of thought allows for a world-enhancing pantheism.

World-enhancing pantheism is idolatrous, while reductive pantheism is just a standard atheistic metaphysics with an alternate semantics for the word "God".

Saturday, March 2, 2013

Infinity, probability and disagreement

Consider the following sequence of events:

  1. You roll a fair die and it rolls out of sight.
  2. An angel appears to you and informs you that you are one of a countable infinity of almost identical twins who independently rolled a fair die that rolled out of sight, and that similar angels are appearing to them all and telling them all the same thing. The twins all reason by the same principles and their past lives have been practically indistinguishable.
  3. The angel adds that infinitely many of the twins rolled six and infinitely many didn't.
  4. The angel then tells you that the angels have worked out a list of pairs of identifiers of you and your twins (you're not exactly alike), such that each twin who rolled six is paired with a twin who didn't roll six.
  5. The angel then informs you that each pair of paired twins will be transported into a room for themselves. And, poof!, it is so. You are sitting across from someone who looks very much like you, and you each know that you rolled six if and only if the other did not.
Let H be the event that you did not roll six. How does the probability of H evolve?

After step 1, presumably your probability of H is 5/6. But after step 5, it would be very odd if it was still 5/6. For if it is still 5/6 after step 5, then you and your twin know that exactly one of you rolled six, and each of you assigns 5/6 to the probability that it was the other person who rolled six. But you have the same evidence, and being almost identical twins, you have the same principles of judgment. So how could you disagree like this, each thinking the other was probably the one who rolled six?

Thus, it seems that after step 5, you should either assign 1/2 or assign no probability to the hypothesis that you didn't get six. And analogously for your twin.
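The 1/2 answer is at least what straightforward conditioning delivers in a finite analogue. Here is a minimal simulation sketch of my own (a hypothetical finite population standing in for the countably many twins; I assume the pairing matches every six-roller with a distinct non-six-roller and leaves the rest unpaired):

```python
import random

# Finite analogue (an illustration, not the infinite case in the post):
# N people each roll a fair die; every six-roller is paired with a distinct,
# randomly chosen non-six-roller; the remaining non-six-rollers stay unpaired.
# Track one fixed person ("you") over many runs and estimate
# P(you did not roll six | you ended up in a pair).

random.seed(0)
N = 600        # population size per run
RUNS = 20_000  # number of independent runs

paired_runs = 0
paired_and_not_six = 0

for _ in range(RUNS):
    rolls = [random.randint(1, 6) for _ in range(N)]
    six_rollers = [i for i in range(N) if rolls[i] == 6]
    non_six_rollers = [i for i in range(N) if rolls[i] != 6]
    random.shuffle(non_six_rollers)
    paired = set(six_rollers) | set(non_six_rollers[:len(six_rollers)])

    if 0 in paired:          # person 0 is "you"
        paired_runs += 1
        if rolls[0] != 6:
            paired_and_not_six += 1

print(paired_and_not_six / paired_runs)  # roughly 0.5, not 5/6
```

In the finite analogue the shift from 5/6 to 1/2 is unmysterious Bayesian conditioning, because only a minority end up paired and being paired is evidence of having rolled six; the question raised above is whether anything analogous can be said when there are infinitely many twins on each side.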

But at which point does the change from 5/6 to 1/2-or-no-probability happen? Surely merely physically being in the same room with the person one was paired with shouldn't have made a difference once the list was prepared. So a change didn't happen in step 5.

And given 3, that such a list was prepared doesn't seem at all relevant. Infinitely many abstract pairings are possible given 3. So it doesn't seem that a change happened in step 4. (I am not sure about this supplementary argument: If it did happen after step 4, then you could imagine having preferences as to whether the angels should make such a list. For instance, suppose that you get a goodie if you rolled six. Then you should want the angels to make the list as it'll increase the probability of your having got six. But it's absurd that you increase your chances of getting the goodie through the list being made. A similar argument can be made about step 5: surely you have no reason to ask the angels to transport you! These supplementary arguments come from a similar argument Hud Hudson offered me in another infinite probability case.)

Maybe a change happened in step 3? But while you did gain genuine information in step 3, it was information that you already had almost certain knowledge of. By the law of large numbers, with probability 1, infinitely many of the rolls will be sixes and infinitely many won't. Simply learning something that has probability 1 shouldn't change the probability from 5/6 to 1/2-or-no-probability. Indeed, if it should make any difference, it should be an infinitesimal difference. If the change happens at step 3, Bayesian update is violated and diachronic Dutch books loom.
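The probability-1 claim can be spelled out; here is a quick sketch (my gloss, modeling the rolls as an independent sequence X_1, X_2, ... and using the second Borel-Cantelli lemma rather than the law of large numbers):

\[
\sum_{n=1}^{\infty} P(X_n = 6) = \sum_{n=1}^{\infty} \tfrac{1}{6} = \infty,
\]

so, by independence, with probability 1 infinitely many of the X_n equal six; the same argument applied to \(P(X_n \neq 6) = \tfrac{5}{6}\) gives infinitely many non-sixes almost surely, and hence the conjunction the angel reports has prior probability 1.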

So it seems that the change had to happen all at once in step 2. But this has serious repercussions: it undercuts probabilistic reasoning if we live in a multiverse with infinitely many near-duplicates. In particular, it shows that any scientific theory that posits such a multiverse is self-defeating, since scientific theories have a probabilistic basis.

I think the main alternative to this conclusion is to think that your probability is still 5/6 after step 5. That could have interesting repercussions for the disagreement literature.

Fun variant: All of the twins are future and past selves of yours (whose memory will be wiped after the experiment is over).

I'm grateful to Hud Hudson for a discussion in the course of which I came to this kind of an example (and some details are his).