Monday, July 8, 2013

Winners and losers

I have tried three times before to formulate a paradox that comes up when countably infinitely many people roll dice, looking for the most compelling version. Let me try one more time. You observe Jones participating in the following game. She rolls a die, but doesn't get to see the result. Over the next six days, on day n she is said to be a winner if n is what she rolled. Thus, Jones will be a winner on exactly one of the next six days. Only after the end of the six days will Jones find out on which day she was a winner. Let's say it's now day n. What's the probability that Jones is a winner now? Surely it's 1/6.

But now you find out that infinitely many people independently participated in this game, and that, completely unsurprisingly, each of the six possible die outcomes was rolled by infinitely many players.

Furthermore, you find out that the organizers set up the following arrangement. On each of the next six days, they paired off the players, with the following properties:

  1. On any given day, every player is paired with some other player.
  2. No player is ever paired twice with the same player.
  3. Each pair consists of exactly one winner and one loser.
(Assuming that each die outcome was rolled by infinitely many people, it is possible to have such a pairing.)
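
To see why the infinitude matters, consider a single day n on its own: the players who rolled n and the players who didn't both form countably infinite sets, so we can simply pair the k-th winner with the k-th loser and nobody is left over. Here is a minimal sketch of that idea in Python (the random stream of players and the helper names are mine, purely for illustration); satisfying property 2 across all six days at once takes more bookkeeping, but the underlying trick is the same.

```python
import itertools
import random

random.seed(1)

def rolls():
    # A stand-in for the countably infinite crowd: player k gets a random die roll.
    for player_id in itertools.count():
        yield player_id, random.randint(1, 6)

def day_pairing(n, how_many=5):
    # Pair the k-th player who rolled n (a day-n winner) with the k-th player
    # who didn't (a day-n loser). Both groups are infinite, so every player
    # eventually lands in exactly one winner-loser pair for day n.
    winners_stream, losers_stream = itertools.tee(rolls())
    winners = (p for p, r in winners_stream if r == n)
    losers = (p for p, r in losers_stream if r != n)
    return list(itertools.islice(zip(winners, losers), how_many))

print(day_pairing(1))  # the first few (winner, loser) pairs for day 1
```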

So, it's day n, and you observe Jones and Smith paired together. What probability should you assign to Jones being a winner?

Suggestion 1: Jones and Smith are exactly on par, so you should assign equal probability to each being the winner. So you assign 1/2 to each. But now repeat this every day. By the same reasoning, on each of the six days, you have probability 1/2 that Jones is the winner on that day. But you can't assign probability 1/2 to Jones being the winner on each of the six days: Jones is certain to be the winner on exactly one of the six days, so the six probabilities must sum to 1, whereas six halves sum to 3. Your probabilities will be inconsistent.

Suggestion 2: Stick to your guns. Assign probability 1/6 to Jones being the winner. But what goes for Jones goes for Smith. So you should assign 1/6 to Smith being the winner. But this probability assignment is inconsistent as well: it's certain that exactly one of the two is the winner, yet the two probabilities sum to only 1/3.

Well, that didn't work out too well, did it? Maybe it's time to get a bit more elaborate in our suggestions.

Suggestion 3: Constantly shifting probabilities. Let's say on day 1 you observe Jones and Smith paired together. You assign 1/2 to each being the winner on day 1. Then on day 2, you observe Jones and Kowalska paired together. You likewise assign 1/2 to Jones being the winner on day 2. But days 1 and 2 now exhaust your probability, and since you're not certain that Jones didn't win on days 3, 4, 5 or 6, you downgrade your probability that Jones won on day 1. I don't know what you do about days 3, 4, 5 or 6. But what do you do about Smith? On day 2, you observe Smith and Ahsan paired together. If you downgraded your probability that Jones was the winner on day 1, you should also downgrade your probability that Smith was. But that results in inconsistent probabilities, too: exactly one of Jones and Smith won on day 1, yet your probabilities for the two now sum to less than 1. Moreover, this suggestion crucially depends on the days being sequential. But we can come up with a variant story on which all the pairings are simultaneous. For instance, let's suppose that the organizers put strings, with attached tags printed 1, 2, 3, 4, 5 or 6, between pairs of participants (a nasty tangle of them), such that each participant has exactly six strings coming out of her, one with each tag, and the string tagged n joins a participant who rolled n with one who didn't. Jones and Smith have string number 1 joining them. What's the probability that Jones rolled 1? The same issues come up, but simultaneously.

Suggestion 4: No numerical probabilities. Once you see the pairings, you no longer assign numerical probabilities to who is the winner when. Perhaps you assign interval-valued probabilities, presumably assigning the interval [1/6,1/2] to each "x is a winner on day n" hypothesis. Or perhaps you have no probability assignments, simply sticking to non-probabilistic claims like that Jones is a winner on exactly one of the six days, and ditto Smith, and Jones is a winner on day 1 if and only if Smith is not a winner on day 1.

This may be what we have to do. It tracks the fact that the most obvious ways to arrange a pairing satisfying (1)-(3) all require a numerical ordering on the participants; we are given no information about which ordering was used, and there is no meaningful uniform probability distribution over the infinitely many possible orderings.

But Suggestion 4 doesn't come cheap, either. We still have to answer questions about what bets are prudent under the circumstances.

Moreover, we have to ask at which stage of the experiment you drop your numerical probabilities. The radical suggestion is that you do so when you find out that there are infinitely many die rollers. This is radical: it means that we can't do epistemic probabilities if we know we live in a universe with infinitely many die rollers. That may be true (and if true, it means that scientific arguments for an infinite multiverse are self-defeating), but it's really radical.

The less radical options are: (a) drop your numerical probability assignments, say for Jones, when you find out that a pairing satisfying (1)-(3) will actually be announced, and (b) drop your numerical probabilities only when you actually see the pairings.

I think (a) might actually be better. For as soon as you know that such a pairing will be announced, you can stipulate a name, say "JonesPair1", to refer to the first person that Jones will be paired with. And it would be odd if you then changed your probabilities for Jones and JonesPair1 on day 1 just because you found out that JonesPair1 had the legal name "Smith", that she was female, etc.

24 comments:

William said...

If we choose 1/6 of an infinite sampling pool as winners on day 1 and match them with another 1/6 of the pool as losers, doesn't that mean that 4/6 of the infinite pool is infinitely deferred from being chosen for either role on that particular day?

If so, the question is: how do we know Jones is going to be in the game at all?

If we set up the game after Jones rolls, knowing Jones cannot be excluded, that is different (1/6) from the case when the game is fully set up, and then we choose first a pair, then a person we call Jones from the day 1 pairings (1/2).

Alexander R Pruss said...

No: on each day, all the winners are paired off, one-to-one, with all the losers, so nobody is left out.

William said...

Did you pick Jones before or after the pairings were completed?

If before, 1/6; if after, 1/2.

Selection bias in the pairings.

Alexander R Pruss said...

Suppose you're watching all of them?

William said...


Given a choice out of the infinite sequence

(A) 1,2,3,4,5,....

the probability of picking an even counting number is 1/2.

Given the sequence

(B) 1,2,4,6,8,10,3,12,14,16,18,5,20,22,24,26,...

the probability of picking an even counting number is 4/5.

The (B) sequence is an arrangement which incorporates a selection bias.

Since we can rearrange (A) to be (B) and include all of A, it makes a difference whether we re-arrange first or pick first!
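
To make the point concrete, here is a rough Python sketch (the generator names and the cutoff of 100,000 terms are mine): the proportion of evens among the first N terms, which is what the natural-density reading of these probabilities tracks, settles near 1/2 for arrangement (A) and near 4/5 for arrangement (B).

```python
from itertools import count, islice

def arrangement_A():
    # (A): the counting numbers in their usual order.
    return count(1)

def arrangement_B():
    # (B): 1, 2, 4, 6, 8, 10, 3, 12, 14, 16, 18, 5, 20, 22, 24, 26, ...
    odds, evens = count(1, 2), count(2, 2)
    yield next(odds)
    yield from islice(evens, 5)
    while True:
        yield next(odds)
        yield from islice(evens, 4)

def even_share(seq, n=100_000):
    # Proportion of even numbers among the first n terms.
    return sum(x % 2 == 0 for x in islice(seq, n)) / n

print(even_share(arrangement_A()))  # ~0.5
print(even_share(arrangement_B()))  # ~0.8
```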

Alexander R Pruss said...

Why would the order matter for probability?

William said...

What matters is which sequence we picked from; sorry if that was unclear.

I meant, if we first have arrangement (A) and then re-arrange to sequence (B).

If we pick a number from (A), tag it somehow, and then re-arrange to (B), the tagged number will probably end up in a different position in the (B) sequence, but the probability that it was even (versus odd) when we picked it from (A) is still 1/2.

Dagmara Lizlovs said...

About 20 years ago I worked for a boss who knew someone who occasionally went to Atlantic City to gamble. This person was reasonably successful at gambling, and my boss gave me her secrets:

1) She played craps. This was the game with the best probability of winning.

2) Be satisfied with winning a modest sum.

3) As soon as a modest sum is won - LEAVE THE TABLE. Leaving the table at this point is the crucial key.

Alexander R Pruss said...

I thought you could actually consistently maintain a modest per-hour win at blackjack if you knew what you were doing, at least 20 years ago (I think there may have been some changes). When I was a student, I had a friend--very smart applied math guy--who practiced a lot of blackjack, doing computer simulations, before going to a computer graphics conference in Las Vegas. He said he was making about $10 per hour at blackjack.

Austin said...

Hi Professor Pruss,

A couple things:

1) Yes, my understanding is that blackjack has the lowest house margin of any casino game, at roughly .01%. Over the long run one will come close to even, but if one always walks away at a moderate upswing, I think a profit could be made.

2) What is your conclusion from this paradox? That a real infinity is impossible?

Thanks,
Austin

Alexander R Pruss said...

Actually, even in the case of a fair game, you cannot do better than even just by deciding to walk away when you're ahead. The problem is that you might never end up ahead--you might just go down and never return up, in which case you'll have to stop playing some time (when you die or if you run out of money). The optional stopping theorem tells you that there is no way of choosing a stopping time for a game (under some conditions) such that you do on average better than even.
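
Here is a rough simulation sketch in Python of that point (the ±1 game, bankroll, round cap, and helper name are illustrative choices of mine, not anything from the post): with a quit-when-ahead rule, most sessions do end ahead, but the occasional long losing run pulls the average right back to even.

```python
import random

def play_session(max_rounds=10_000, bankroll=100):
    # Fair +/-1 coin-flip game: quit as soon as we're ahead, or when the
    # bankroll runs out or the round limit is hit (we must stop some time).
    net = 0
    for _ in range(max_rounds):
        net += 1 if random.random() < 0.5 else -1
        if net >= 1:            # "walk away when ahead"
            return net
        if net <= -bankroll:    # out of money
            return net
    return net

random.seed(0)
results = [play_session() for _ in range(20_000)]
print("sessions ending ahead:", sum(r > 0 for r in results) / len(results))  # ~0.99
print("average net per session:", sum(results) / len(results))               # ~0
```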

Alexander R Pruss said...

Oh, as for what I conclude, I guess it's that our probabilistic methods of reasoning fall apart in such infinite contexts.

Dagmara Lizlovs said...

"I thought you could actually consistently maintain a modest per-hour win at blackjack if you knew what you were doing."

I have heard something like that myself. I remember my dad also saying things like that about blackjack. I do recall my supervisor talking about the craps player. All this was a long time ago. I also remember that about 20 years ago Dr. Robert Abernathy gave a Monte Carlo demonstration of how you can definitely win at gambling when he came to our place to teach Weibull analysis to our group. I cannot recall if the game was blackjack or not. I do remember that the super returns came only after playing several thousand times, and only if you had substantial cash to start with.

For more on Dr. Abernathy:

http://www.barringer1.com/drbob-bio.htm

I have taken Weibull training from him and Wes Fulton a couple of times many, many years ago.

Dagmara Lizlovs said...

"Actually, even in the case of a fair game, you cannot do better than even just by deciding to walk away when you're ahead. The problem is that you might never end up ahead--you might just go down and never return up,..."

Ah, but what will keep you there is the psychological impact of something called a variable reward schedule. You might get a little bit of a turnaround every so often, and there is always a sense that you might just make it on the next go-around. Here's how it works; I thoroughly learned this technique from my Thoroughbred, Merlin.

My problem was how to get Merlin to do things my way with the least amount of resistance on his part. My body had stopped being damage tolerant, and rodeo riding is just not my cup of tea. I decided to go the route of variably scheduled rewards. When saddling up, I'd fill my saddle bags with a bunch of carrots broken up into pieces. Merlin knew they were there. Initially, when I'd ask him to do something, I'd reward each success on his part. Then I'd change the reward schedule: sometimes I'd reward the first response to my cues, sometimes the second, sometimes the third, etc. And I'd mix them up so that Merlin never knew which positive response to my cues would get him the reward. Then at some point, Merlin would hit the "jackpot". I'd hop off his back, make a big fuss over him and give him the rest of the carrots in the saddle bag. Only he never knew when he'd hit the "jackpot". You should have seen the gusto with which that horse would respond to all my inputs!

Casinos train us to keep playing the same way, only they are a lot less benevolent, and the variable reward schedule is very heavily tilted in their favor. One of the women I was boarding with was going to the casinos to gamble. I told her that the casino was doing nothing but training her to part with her hard-earned cash the same way I trained my horse. She replied, "I know what you've been doing to your horse, and it's not my money I'm gambling with."

Alexander R Pruss said...

Actually, I think casinos have the system only slightly tilted in their favor (rather less than lotteries, I've heard), but they still make lots of money thanks to the wonders of the law of large numbers.

Dagmara Lizlovs said...

"Actually, I think casinos have the system only slightly tilted in their favor, ..."

Here is a bit of a human face on that. Over 20 years ago I had a boyfriend whose father was a gambling addict. Many years ago, my sister had a friend whose husband was a gambling addict. I'm only too aware of the devastation that a "system only slightly tilted in their favor" can wreak on families and marriages.

Back when I was working up in Trenton, New Jersey, I had a training class held in Atlantic City. It was one of those one-day motivational seminars on managing people, and it was held in a room in one of the casinos. After class I tried my hand at the slots. I watched one guy, obviously of lower income than myself, judging from his work uniform, constantly throwing money into the slots and pulling the lever like there was no tomorrow, money he probably couldn't afford to lose. After losing something like $40, I up and left. In the parking lot I met an older couple. They asked me if I had won anything; I said no, I lost $40. They replied that that was nothing, they had lost several hundred dollars, and they sounded like they were trying to one-up me even though they were by far the bigger losers.

Alexander R Pruss said...

Looks like here's a story that's very similar. (Thanks, LB, for pointing me to it.)

Alexander R Pruss said...

Indeed that slight tilt ruins lives. It's devilishly clever.

Dagmara Lizlovs said...

Genesis 3:1 does state that the serpent was the most cunning (subtle) of all the wild animals the Lord God had created.

The thing about it is that that slight tilt doesn't look so bad at all. In fact, sometimes it will appear as something good; indeed, it needs a lesser good in order to function.

IanS said...

If I liked indifference distributions, I might answer like this:

Option 3 can be made to work, at least for any finite number of known pairs. The known pairs relate to a finite number (say N) of different players. For these players, list all 6^N combinations of dice rolls. Strike out all combinations that are inconsistent with the known pairs, and assign equal probabilities to the remaining ones. (All the other players are assigned 1/6 to each day, independently for each player.) The resulting distribution is as “uniform” as it can be, consistent with the known pairs.

If, for example, my known pairs are all from day 1, I will assign each paired player 1/2 for day 1 and 1/10 for the other days. But if I know all Jones’ pairs and no other pairs, I will assign 1/6 to Jones for all days and 1/6 to all her partners for the day they are paired with her (work it out if you doubt it). This seems reasonable. This method, though tedious, can in principle be applied to any finite number of known pairs.
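
Here is a brute-force check of this recipe, as a rough Python sketch (the player indices, the known-pair encoding, and the winner_probs helper are mine); note that for the paired partner in the second example it yields 5/6 rather than 1/6, in line with the correction further down the thread.

```python
from itertools import product
from fractions import Fraction

def winner_probs(n_players, known_pairs, player, day):
    # Enumerate all 6^n roll combinations, keep those consistent with the
    # known pairs (exactly one member of a day-d pair rolled d), and return
    # the fraction of the survivors in which `player` rolled `day`.
    consistent = hits = 0
    for rolls in product(range(1, 7), repeat=n_players):
        if all((rolls[a] == d) != (rolls[b] == d) for d, a, b in known_pairs):
            consistent += 1
            hits += rolls[player] == day
    return Fraction(hits, consistent)

# Two players known to be paired on day 1:
print(winner_probs(2, [(1, 0, 1)], player=0, day=1))  # 1/2 for day 1
print(winner_probs(2, [(1, 0, 1)], player=0, day=2))  # 1/10 for each other day

# Jones (player 0) paired with a different partner (players 1-6) on days 1-6:
jones_pairs = [(d, 0, d) for d in range(1, 7)]
print(winner_probs(7, jones_pairs, player=0, day=1))  # 1/6 for Jones on any day
print(winner_probs(7, jones_pairs, player=1, day=1))  # 5/6 for her day-1 partner
```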

What about an infinite number? First, unlike the organizers (who have the super powers needed to form an infinite number of pairs in a finite time), I am an ordinary human. I can’t know an infinite number of pairs. Second, I already know that indifference does not work even in the simplest infinite case (the infinite lottery), so I don’t expect it to work here.

IanS said...

If I preferred more objective probabilities, I might answer like this: Option 4, unknown rather than interval-valued probabilities, and change when I see a pair.

I don’t know how the organizers do the pairing, so I can’t calculate probabilities that depend on the pairing. Suppose I see Jones and Smith paired on day one. The organisers may, for all I know, have a rule that they will pair Jones with Smith only on Jones’ winning day. This would make Jones a winner. Or maybe they will pair Jones with Smith only on Smith’s winning day. This would make Jones a loser. So I can say nothing about the probability that Jones is a winner (and interval-valued probabilities are ruled out).

Some pairings lead to tricky logical relations. Suppose for example that Jones and Smith are paired on day 1, Smith and Robinson on day 2 and Robinson and Jones on day 3. Then Jones must have rolled either 1 or 3, so her partner on day 2 must have rolled 2 (and days 4, 5 and 6 are similar). So I have to say that the probabilities that are not logically determined are unknown.
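
A quick enumeration check of the cyclic example, as a rough Python sketch (the names are mine), confirms that only rolls of 1 or 3 survive for Jones:

```python
from itertools import product

players = ["Jones", "Smith", "Robinson"]
# Exactly one member of each day-d pair rolled d.
pairs = [(1, "Jones", "Smith"), (2, "Smith", "Robinson"), (3, "Robinson", "Jones")]

jones_rolls = set()
for combo in product(range(1, 7), repeat=3):
    roll = dict(zip(players, combo))
    if all((roll[a] == d) != (roll[b] == d) for d, a, b in pairs):
        jones_rolls.add(roll["Jones"])
print(sorted(jones_rolls))  # [1, 3]
```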

When to change? When I learn something that depends on the pairing. When I see Jones paired, I learn which player she is paired with. This is precisely what a placeholder like JonesPair1 does not tell me.

Alexander R Pruss said...

I missed the point about there being such logical relations. Nice point.

IanS said...

Correction to my first post, 2nd last paragraph: "...and 1/6 to all her partners..." should be 5/6.

On logical relations: One could simply disallow cyclic pairings, perhaps as an extension of property 2. It would still be possible to pair all the players: First find all the first player’s partners. For each of these 6 players in turn, find the 5 remaining partners from previously unpaired players. For each of these 30 players in turn, repeat the process, etc. This procedure would pair everyone with no loops.
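
For what it's worth, here is a rough Python sketch of that tree-like construction (the function and variable names are mine); it tracks only the pairing structure, and matching each fresh partner's roll to the relevant day would be an extra step, possible because every roll-class is infinite.

```python
from collections import deque

def acyclic_pairing(players_to_process=7):
    # Breadth-first: give each processed player a brand-new partner for every
    # day on which it does not yet have one. The pairing graph is then a tree,
    # so no two players ever meet twice and no cycles can arise.
    partner = {}     # partner[(player, day)] = the player it is paired with that day
    queue = deque([0])
    next_fresh = 1
    for _ in range(players_to_process):
        p = queue.popleft()
        for day in range(1, 7):
            if (p, day) not in partner:
                q, next_fresh = next_fresh, next_fresh + 1
                partner[(p, day)] = q
                partner[(q, day)] = p
                queue.append(q)
    return partner

pairing = acyclic_pairing()
print([pairing[(0, day)] for day in range(1, 7)])  # player 0's partners: [1, 2, 3, 4, 5, 6]
```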

Would you like to say more on why betting odds may be a problem for suggestion 4?

Alexander R Pruss said...

Well, it does seem like there are systems of bets that are good and systems of bets that are bad, and suggestion 4 doesn't allow gauging that. (This is Elga's objection that imprecise probabilities get one to pass by good books.) But now that I think about it, this isn't so pressing a worry.