Saturday, August 30, 2014

Preemption of laws of nature

I've been thinking about cases of preemption of laws of nature. Normally, an instance of a law is explained by the law. Let's say some dose of cyanide is lethal to chickens within minutes. Then it seems it's a law that a chicken is dead ten minutes after ingesting that dose (take "that dose" to be an abbreviation for the actual dose) of cyanide. Normally, then, if a chicken ingested the cyanide more than ten minutes ago and now is dead, that's because of the law.

But what if the law is preempted? Suppose that a few seconds after taking the cyanide, the chicken is eaten by an unfortunate fox. Then the case is still an instance of the law, but is not explained by the law. (This is a tweak of an improvement by Brad Rettler of a case I used in class today.) The law is true, but preempted by the fox, and hence it is not explanatory.

So it seems that for an event to happen because of a law is something more than for it to fall under the law.

Consider a way of getting out of this. Perhaps it's not a law that

  1. a chicken is dead ten minutes after ingesting that dose of cyanide.
Maybe instead it's a law that
  2. if meanwhile there are no other causes of death and it's not the case that the chicken dies causelessly, then a chicken is dead from the cyanide ten minutes after ingesting this dose of cyanide.
If so, then we have to suppose that lawhood is not closed under the combination of conjunction and relevant entailment, since (1) follows from the conjunction of (2) with the law that dead chickens stay dead. That's an interesting result. Moreover, we have to deny that laws are just the universal generalizations that support counterfactuals, since (1) supports counterfactuals every bit as well as (2) does.

All this can be done. Still, (1) looks much more like a law than (2) does, and it seems to me that rather than fooling with something like this, it's better to allow that laws can be preempted. An instance of a law need not be true because of that law.

There might even be laws where in fact typically instances of the law are not true because of the law. Maybe, because of biological clocks in our cells, it's a law that nobody survives to be 130 years of age. But most people don't die because of this law. They die of heart attacks, car accidents, cancer, etc.

Friday, August 29, 2014

How impossible can we get?

I've been thinking about a framework for really impossible worlds. The first framework I think of is this. A world w is a mapping (basically, a function, except that the propositions don't form a set) from propositions to truth values. Thus, if w0 is the actual world, w0(<the sky is blue>)=T and w0(<2+2=5>)=F. But there will be a world w with all sorts of weird truth assignments, for instance one where a conjunction is true but its conjuncts are false, or where p is false but its negation is also false.

But I then wondered if this captures the full range of alethic impossibilities. What about impossibilities like this: Worlds at which <2+2=4> has no truth value? Worlds at which every proposition is both true and false? To handle such options it's tempting to loosen the requirement that w is a mapping to the requirement that it be a relation. Thus, some propositions might not be w-related to any truth value and some propositions might be w-related to multiple truth values. But we can get weirder than that! What about worlds w at which the truth value of <2+2=4> is Sherlock Holmes? Nonsense, you say? But no more nonsense than something being both true and false. So perhaps w should be a relation not just between propositions and truth values, but propositions and any objects at all, possible or not. But even that doesn't exhaust the options of truth assignments. For what about a world where truth is assigned to every cat and to no proposition, or where instead of <2+2=4> having truth, truth has <2+2=4>? So perhaps worlds are just relations between objects, impossible or possible?

Of course, it feels like we've lost grip on meaningfulness somewhere in the last paragraph. But it's not clear where. My suggestion now is that none of the complications are needed. In fact, even the initial framework where a world is a truth assignment may be needlessly complicated. Let's take instead the simpler framework that a world is a collection of propositions.

Thus, p is true at w if and only if p is a member of w. And p is false at w if and only if ~p is a member of w.

But what about the bizarre options? On this framework, for any world w, either <2+2=4> is a member of w and hence true at w or it's not. What about the possibility that it is both true and non-true at w? I think the framework can handle all the bizarre possibilities provided that we understand them as world-internal. What is true at w is a question external to w, a question to be settled by the classical logic that is actually correct. Either p is true at w or it's not, and it can't be both true and non-true. But, nonetheless, while it can't be that p is true at w and not true at w, it can be that p is true at w and p is false at w (just suppose both p and ~p are members of w). So that p is false at w does not imply the denial of the claim that p is true at w.

All the bizarreness, however, is to be found in world-internal claims. Let's say that p is (not) true in w provided that the proposition <p is (not) true> is true at w (in the external sense), i.e., <p is true> is a member of w. Likewise, say that p is (not) false in w provided that <p is (not) false> is true at w. And so on: in general, S(p) in w provided that <S(p)> is true at w. Then while truth-at w is relatively tame, truth- and falsity-in w can be utterly wild. We can have p true in w and p not true in w. We can have a world w in which <2+2=4> has the truth value Benjamin Franklin and is also false and true. There will be a world in which ~(2+2=4) but it is nonetheless true that 2+2=4. And so on. It's all a matter of getting the scope of the world-relativizing operator right.
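
For concreteness, here is a toy rendering of the simple framework in code (a minimal sketch of my own; Proposition, Neg and TruthClaim are illustrative stand-ins, not anything from the post):

    import java.util.Set;

    public class ImpossibleWorlds {
        interface Proposition {}
        // ~p and <p is true>, modeled structurally so that set membership works.
        record Neg(Proposition p) implements Proposition {}
        record TruthClaim(Proposition p) implements Proposition {}

        // External sense: truth AT w is plain membership, governed by the
        // classical logic that is actually correct. For any p and w this
        // either holds or it doesn't, never both.
        static boolean trueAt(Proposition p, Set<Proposition> w) {
            return w.contains(p);
        }

        // Falsity at w is membership of the negation. Nothing prevents a
        // world from containing both p and ~p, so trueAt and falseAt can
        // both hold of the same p and w.
        static boolean falseAt(Proposition p, Set<Proposition> w) {
            return w.contains(new Neg(p));
        }

        // Internal sense: truth IN w is membership of <p is true>, a
        // world-internal claim that is free to be as wild as we like.
        static boolean trueIn(Proposition p, Set<Proposition> w) {
            return w.contains(new TruthClaim(p));
        }
    }

All the wildness then lives in which propositions a world happens to contain; trueAt itself never misbehaves.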

Thursday, August 28, 2014

A very impossible world?

In a criticism of the Pearce-Pruss account of omnipotence, Scott Hill considers an interesting impossible situation:
  1. Every necessary truth is false.

While the criticism of the Pearce-Pruss account is interesting, I am more interested in a claim that Hill makes that illustrates an interesting fallacy in reasoning about impossible worlds. Hill takes it that a world at which (1) holds is a world very alien from ours, a world at which there are "infinitely many" "false necessary truths".
But that's a fallacious inference from:
  2. (∀p(Lp→(p is false))) is true at w
(where Lp says that p is necessary) to
  3. ∀p(Lp→(p is false at w)).

Indeed, there is an impossible world w with the property that (1) is true at w and there is no necessary truth p such that p is false at w. Think of a world as an assignment of truth values to propositions. A possible world is an assignment that can be jointly satisfied—i.e., it is possible that the truth values are as assigned. An impossible world is an assignment that cannot be jointly satisfied. Well, let w0 be the actual world. Then for every proposition p other than (1), let w assign to p the same truth value as it has according to w0. And then let w assign truth to (1).
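
In symbols (my regimentation, not Hill's): w(q) = w0(q) for every proposition q other than (1), and w((1)) = T. Since (1) is itself necessarily false rather than a necessary truth, every necessary truth keeps its actual value T under w; so (2) holds while (3) fails.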

Wednesday, August 27, 2014

A better way to run Kalaam arguments?

Crucial to the Kalaam argument is showing that the universe has only a finite past. A standard approach is:

  1. If the universe has an infinite past, there is an actual infinite.
  2. If an actual infinite is possible, Hilbert's Hotel is possible.
  3. Hilbert's Hotel is impossible.
  4. So, the universe doesn't have an infinite past.
Apart from the real challenge, which is defending (3), there are at least three distracting difficulties here.

First, one needs to defend the often not very clear distinction between a potential infinite and an actual infinite.

Second, holding on to the argument while believing in an infinite future afterlife requires a very particular theory of time: the growing block theory. For consider the two alternatives: eternalism and presentism. On eternalism, future infinites are no less actual than past ones, and so an infinite future is just as much a problem as an infinite past. On presentism, neither past nor future infinites are actual, and premise (1) fails.

Third, the conditional in (2) is dubious. Not all actual infinites are of the same sort. An actual infinity of past events does not seem to be close enough to Hilbert's Hotel—a present infinity of rooms—to allow inferring the possibility of the Hotel from the possibility of the past infinity.

Here is an alternative:

  5. If the universe has an infinite past, a simultaneous infinity of objects is possible.
  6. If a simultaneous infinity of objects is possible, Hilbert's Hotel is possible.
  7. Hilbert's Hotel is impossible.
  8. So, the universe doesn't have an infinite past.

Advantages: We don't need any murky distinction between actual and potential infinites, just the much clearer notion of a simultaneous infinite.[note 1] An infinite future is not an issue, and any theory of time can be plugged in. Finally, the move from a simultaneous infinity of objects to Hilbert's Hotel in (6) is much more plausible than the corresponding move in (2). For if a simultaneous infinity of objects is possible, surely they could be shaped like rooms.

The one disadvantage, of course, is that (5) needs an argument. Here it is. If an infinite past is possible, surely it's possible that at infinitely many past times an object came into existence that has not yet ceased to exist. But if that happened, there are now infinitely many objects, a simultaneous infinity.

Still, one might worry. How can we argue for the possibility of an object coming into existence at infinitely many past times, given an infinite past? Well, we can imagine a law of nature on which when two particles collide they are annihilated into four particles, with correspondingly smaller individual mass-energy, and we can imagine that these particles by law cannot otherwise disappear. We can then suppose that there have always been such particles, and that during each of a past infinity of years there was at least one collision. Then that very plausibly possible story implies that there is a present infinity of particles.
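
To make the arithmetic explicit (my notation, not in the original): each collision replaces two particles with four, a net gain of two, and by hypothesis no particle ever disappears. So if at least k collisions have occurred since some past time t, the present number of particles is at least n(t) + 2k ≥ 2k, where n(t) is the count at t. With at least one collision per year over an infinite past, k can be taken as large as we like, so the present number of particles exceeds every finite bound.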

I think the difficulties in arguing for (5) are far outweighed by the advantages of the simultaneous infinity formulation of the argument.

Tuesday, August 26, 2014

Upscaling baud rates

There is almost no philosophy here. :-)

Suppose I want to send some data to a device that runs at baud rate a from a serial device that runs at a different baud rate b.  If a > b, I am out of luck.  Probably likewise I'm out of luck if they're close.  But if a is much smaller than b, then I can emulate the lower baud rate a from the higher baud rate device, by rescaling the bit data, taking into account start and stop bits.  Unfortunately, the required start and stop bits will introduce some problems into the bit stream--but if you're lucky, your receiving device will just treat these problems as a glitch and ignore them.

For instance, suppose I want to send the hex byte
53 (ASCII 'S')
from a device running at 57600 baud (8 N 1) to a device running at 9600 baud (also 8 N 1).  It turns out that if I transmit
E0 7F 00 1F 7E F8  
this may be a decent approximation.

For when I send the longer byte string, what actually goes over the wire is:
0000001111011111110100000000010111110001001111110100001111110
(in each successive group of ten bits, the first is a start bit (0) and the last is a stop bit (1); data is sent least-significant-bit first).
The bit pattern for hex 53 (including the start and stop bits) should be:
0110010101
which with a perfect 6X (=57600/9600) rescaling would be:
000000111111111111000000000000111111000000111111000000111111

Putting the two side-by-side, with the bits that differ marked, we get:
0000001111011111110100000000010111110001001111110100001111110
000000111111111111000000000000111111000000111111000000111111
          ^        ^         ^^        ^         ^
(the final zero in the first stream doesn't matter).

If you're lucky, the device you're talking to will take the marked bits to be glitches and ignore them, and the data will get through just fine.
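
The rescaling itself is mechanical. Here is a minimal sketch of the idea (my own illustrative code, not the DataLink.java linked below), assuming 8N1 framing on both ends and an exact integer baud ratio:

    public class BaudUpscale {
        // Emulate one byte at the slow rate by sending several bytes at the
        // fast rate, where fastBaud = ratio * slowBaud (e.g. 57600 = 6 * 9600).
        public static int[] upscale(int dataByte, int ratio) {
            // The 10-bit 8N1 frame at the slow rate: start bit (0), eight
            // data bits least-significant-bit first, stop bit (1).
            int[] frame = new int[10];
            frame[0] = 0;
            for (int i = 0; i < 8; i++) frame[1 + i] = (dataByte >> i) & 1;
            frame[9] = 1;

            // The ideal fast bitstream: each slow bit held for ratio bit-times.
            int[] ideal = new int[10 * ratio];
            for (int i = 0; i < ideal.length; i++) ideal[i] = frame[i / ratio];

            // Slice into 10-bit fast frames, keeping only the eight data-bit
            // positions of each; the fast UART supplies its own start and stop
            // bits, and wherever those disagree with the ideal stream we get
            // one of the glitch bits marked above.
            int[] out = new int[ideal.length / 10];
            for (int f = 0; f < out.length; f++) {
                int b = 0;
                for (int i = 0; i < 8; i++) b |= ideal[10 * f + 1 + i] << i;
                out[f] = b;
            }
            return out; // upscale(0x53, 6) gives E0 7F 00 1F 7E F8
        }
    }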

The code for scaling the bitstream is here (it's GPL3 on the face, but feel free to use DataLink.java under the BSD 3-clause license if you prefer). 

Would one ever need to do this?  Maybe.  I want to make the Mindflex toy EEG headset behave just like the MindWave Mobile EEG headset.  To do that, I hook up a Bluetooth-to-TTL-serial link to the Mindflex toy headset, and I need to send the hex data 02 to the toy headset to switch it to the mode that the MindWave Mobile headset runs at.  Unfortunately, that 02 needs to be sent at 9600 baud, and then the headset will switch to 57600 baud.  This is alright if you can control the baud rate of the serial link on the fly, as has been the case when I've been using a Brainlink with custom firmware (and an Android app to switch to the mode that works like the MindWave).  But I want to switch to a simple HC-06 link, and those can't adjust baud rate dynamically, so I will have to use the above trick.  (I've already tested the trick using the Brainlink and that works.)

I said there was almost no philosophy here.  The little bit of philosophy is the curious self-observation that in my programming work, a not insignificant proportion of what I do is fooling devices.  In the above case, I am fooling a device that expects a 9600 baud connection into "thinking" it's getting one.  This is an entirely morally innocent kind of "deception".  I suppose the Kantian point is that when we try to deceive people, we are treating them in a way that is appropriate for machines.

Monday, August 25, 2014

The five minute hypothesis reconsidered

Let S1 be the state the universe in fact had five minutes ago and let S0 be the state the universe in fact had "at" (it may be a limiting boundary condition) the Big Bang. Is S0 any more amenable to coming into existence causelessly ex nihilo than S1? Surely not! (Can one even compare probabilities of causeless ex nihilo poppings-into-existence?) But then why should the atheist think that it was S0 rather than S1 that came into existence causelessly ex nihilo? Neither hypothesis is intrinsically more likely than the other, and both fit our observations equally well.

The theist has no such problem, because there are good value-based reasons why S0 is more likely to be created than S1.

Friday, August 22, 2014

Freedom and consciousness

The following seems a logically possible story about how some contingent agent chooses. The agent consciously deliberates between reasons in favor of action A and reasons in favor of action B. The agent then forms a free decision for A—an act of will in favor of A. This free decision then causes two things: it causes the agent to do A and it causes the agent to be aware of having decided in favor of A.

Not only does the above story seem logically possible, but it seems likely to be true in at least some, and perhaps even all, cases of our free choices.
But if the above story is true, then it will also be possible for the causal link between the agent's decision and the agent's awareness of the decision to be severed, say because someone hit the agent on the head right after the decision was made and right before the agent became aware of it, or because God miraculously suspended the causal linkage. In such a case, however, the agent will still have decided for A, and would have done so freely, but would not have been aware of so deciding.

Thus it is possible to freely decide for A without being aware that one has freely decided for A. This no doubt goes against common intuitions.

I think the main point to challenge in my story is the claim that it is possible that the decision causes the awareness of the decision. Maybe a decision for A has to be the kind of mental state that has awareness of its own nature built right in, so the awareness is simultaneous with and constituted by the decision. I think this is phenomenologically implausible. It seems to me that many times I am only aware of having decided to perform an action when I am already doing the physical movements partly constituting the action. But presumably the movements (at least typically) come after I've made up my mind, after my decision.

It would be a strange thing to have decided but not to have been aware of how one has decided. Perhaps we can imaginatively wrap our minds around this by thinking about cases where an agent remembers deliberating but doesn't remember what decision she came to. Surely that happens to all of us. Of course, in typical such cases, the agent was at some point aware of the outcome of the deliberation. So this isn't going to get our minds around the story completely. But it may help a little.

In the above, I want to distinguish awareness of choice from prediction of choice. It may be that even before one has made a decision, one has a very solid prediction of how one's choice will go. That prediction is not what I am talking about.

A criticism of some consequence arguments

The standard consequence argument for incompatibilism makes use of the operator Np which abbreviates "p and no one has or has ever had a choice about whether p". Abbreviating the second conjunct as N*p, we have Np equivalent to "p and N*p". The argument then makes use of a transfer principle, like:

  • beta-2: If Np and p entails q, then Nq.
When I think about beta-2, it seems quite intuitive. The way I tend to think about it is this: "Well, if I have no choice about p, and p entails q, then how can I have a choice about q?" But this line of reasoning commits me not just to beta-2, but to the stronger principle:
  • beta-2*: If N*p and p entails q, then N*q.
But beta-2* is simply false. For instance, let p be any necessary falsehood. Then clearly N*p. But if p is a necessary falsehood, then p entails q for every q, and so we conclude—without any assumptions about determinism, freedom and the like—that no one has a choice about anything. And that's unacceptable.

This may be what Mike Almeida is getting at in this interesting discussion which inspired this post.

Of course, this counterexample to beta-2* is not a counterexample to beta-2, since although we have N*p, we do not have Np, as we do not have p. But if the intuition driving one to beta-2 commits one also to beta-2*, then that undercuts the intuitive justification for beta-2. And that's a problem. One might still say: "Well, yes, we have a counterexample to beta-2*. But beta-2 captures most of the intuitive content of beta-2*, and is not subject to this counterexample." But I think such arguments are not very strong.

This is not, however, a problem if instead of accepting beta-2 on the basis of sheer intuition, one accepts it because it provably follows from a reasonable counterfactual rendering of the N*p operator.

Thursday, August 21, 2014

A possible limitation of explicitly probabilistic reasoning

Bayesian reasoners will have their credences converge to the truth at different rates depending on their prior probabilities. But it's not as if there is one set of prior probabilities that will always lead to optimal convergence. Rather, some sets of priors lead to truth faster in some worlds and some lead to truth faster in others. This is trivially obvious: for any world w, one can have priors that are uniquely tuned for w, say by assigning a probability extremely close to 1 to every contingent proposition true at w and a probability extremely close to 0 to every contingent proposition false at w. Of course, there is the question of how one could get to have such priors, but one might just have got lucky!

So, a Bayesian reasoner's credence converges to the truth at rates depending on her priors and what kind of a world she is in. For instance, if she is in a very messy world, she will get to the truth faster if she has lower prior credences for elegant universal generalizations, while if she is in a more elegant world (like ours!), higher prior credences for such generalizations will lead her to truth more readily.
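
Here is a toy illustration of the rate claim (entirely my own numbers and code): two Beta-Bernoulli learners watch the same coin. The prior tuned to the coin's actual bias gets close to the truth faster, and in a world with the opposite bias the ranking would reverse.

    import java.util.Random;

    public class PriorTuning {
        public static void main(String[] args) {
            double trueBias = 0.9;                 // the world: a mostly-heads coin
            double[][] priors = {{9, 1}, {1, 9}};  // Beta(alpha, beta): tuned vs. mistuned
            Random rng = new Random(42);
            int heads = 0, tosses = 0;
            for (int n = 1; n <= 100; n++) {
                if (rng.nextDouble() < trueBias) heads++;
                tosses++;
                if (n % 25 == 0) {
                    for (double[] prior : priors) {
                        // Posterior mean of Beta(alpha + heads, beta + tails).
                        double mean = (prior[0] + heads) / (prior[0] + prior[1] + tosses);
                        System.out.printf("n=%3d  prior Beta(%.0f,%.0f)  estimate=%.3f%n",
                                n, prior[0], prior[1], mean);
                    }
                }
            }
        }
    }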

Now suppose that our ordinary rational ampliative reasoning processes are not explicitly probabilistic but can be to a good approximation modeled by a Bayesian system with a prior probability assignment P0. It is tempting to think that then we would do better to explicitly reason probabilistically according to this Bayesian system. That may be the case. But unless we have a good guess as to what the prior probability assignment P0 is, this is not an option. Sure, let's suppose that our rational processes can be modeled quite well with a Bayesian system with priors P0. But we won't be able to ditch our native reasoning processes in favor of the Bayesian system if we do not have a good guess as to what the priors P0 are. And we have no direct introspective access to the priors P0 implicit in our reasoning processes, while our indirect access to them (e.g., through psychological experiments about people's responses to evidence) is pretty crude and inaccurate.

Imagine now that, due to God and/or natural selection, we have ampliative reasoning processes that are tuned for a world like ours. These processes can be modeled by Bayesian reasoning with priors P0, which priors P0 would then be tuned well for a world like ours. But it may be that our best informed guess Q0 as to the priors will be much more poorly tuned to our world than the priors P0 actually implicit in our reasoning. In that case, switching from our ordinary reasoning processes to something explicitly probabilistic will throw away the information contained in the implicit priors P0, information placed there by the divine and/or evolutionary tuning process.

If this is right, then sometimes or often when trying to do a formal probabilistic reconstruction of an intuitive inductive argument we will do less well than simply by sticking to the inductive argument. For our ordinary intuitive inductive reasoning is, on this hypothesis, tuned well for our world. But our probabilistic reconstruction may not benefit from this tuning.

On this tuning hypothesis, experimental philosophy is actually a good path to epistemological research. For how people reason carries implicit information as to what priors fit our world well.

Wednesday, August 20, 2014

Induction over brute facts, and the initial state of the universe

Suppose that we've observed a dozen randomly chosen ravens and they're all black. We (cautiously) make the obvious inference that all ravens are black. But then we find out that regardless of parental color, newly conceived raven embryos have a 50% chance of being black and a 50% chance of being white, and that they have equal life expectancy in the two cases. When we find this out, we thereby also find out that it was just a fluke that our dozen ravens were all black. Thus, finding out that it's random with probability 1/2 that a given raven will be black defeats the obvious inference that all ravens are black, and even defeats the inference that the next raven we will see will be black. The probability that the next raven we observe will be black is 1/2.

Next, suppose that instead of finding out about probabilities, we find out that there is no propensity either way of a conception resulting in a black raven or its resulting in a white raven. Perhaps an alien uniformly randomly tosses a perfectly sharp dart at a target, and makes a new raven be black whenever the dart lands in a maximally nonmeasurable subset S of the target and makes the raven be white if it lands outside S. (A subset S of a probability space Ω is maximally nonmeasurable provided that every measurable subset of S has probability zero and every measurable superset of S has probability one.) This is just as much a defeater as finding out that the event was random with probability 1/2. It's still just a fluke that the dozen ravens we observed were all black. We still have a defeater for the claim that all ravens are black, or even that the next raven is black.

Finally, suppose instead that we find out that ravens come into existence with no cause, for no reason, stochastic or otherwise, and their colors are likewise brute and unexplained. This surely is just as good a defeater for inferences about the colors of ravens. It's just a fluke that all the ones we saw so far were black.

Now suppose that the initial state of the universe is a brute fact, something with no explanation, stochastic or otherwise. We have (indirect) observations of a portion of that initial state: for instance, we find the parts of the state that have evolved into the observed parts of the universe to have had very low entropy. And science appropriately makes inferences from the parts of the initial state that have been observed by us to the parts that have not been observed, and even to the parts that are not observable. Thus, it is widely accepted that the whole of the initial state had very low entropy, not just the part of it that has formed the basis of our observations. But if the initial state and all of its features are brute facts, then this bruteness is a defeater for inductive inferences from the observed to the unobserved portions of the initial state.

So some cosmological inductive inferences require that the initial state of the universe not be entirely brute.

Monday, August 18, 2014

Baylor - Georgetown - Notre Dame 2014 conference

The 2014 Baylor/Georgetown/Notre Dame Philosophy of Religion Conference will be held at Georgetown University October 9 through October 11.  All sessions will be held in New North 204.  Below is the schedule.  Please contact Mark Murphy (mark.murphy@georgetown.edu) if you plan to attend.  Please also let him know if you need conference hotel information.  And if it would help you to get funding to attend the conference if you served as a chair for one of the sessions, let him know that, also.

Thursday, October 9

7-8:30 PM   Karen Stohr (Georgetown), “Hope for the Hopeless” (Commentator: Micah Lott, Boston College)

8:30 PM
Reception in New North 204


Friday, October 10

9:00-10:25 Kathryn Pogin (Northwestern), "Redemptive or Corruptive? The Atonement and Hermeneutical Injustice" (Commentator: Katherin Rogers, Delaware)

10:35-12  Neal Judisch (Oklahoma), “Redemptive Suffering” (Commentator: Siobhan Nash-Marshall, Manhattanville)

12-2:30   Lunch on own

2:30-3:55  Christian Miller (Wake Forest), “Should Christians be Worried about Situationist Claims in Psychology and Philosophy?” (Commentator: Dan Moller, Maryland)

4:05-5:30  Chris Tucker (William and Mary), “Satisficing and Motivated Submaximization (in the Philosophy of Religion)” (Commentator: Kelly Heuer, Georgetown)

Saturday, October 11

9:00-10:25 Julia Jorati (Ohio State), "Special Agents: Leibniz on the Similarity between Divine and Human Agency" (Commentator: Kristin Primus, Georgetown/NYU)

10:35-12  Kris McDaniel (Syracuse), “Being and Essence” (Commentator: Trenton Merricks, UVA)

12-2:30 Lunch on own

2:30-3:55  Meghan Page (Loyola (MD)/Baylor), “The Posture of Faith: Leaning in to Belief” (Commentator: Mark Lance, Georgetown)

4:05-5:30  Charity Anderson (Baylor), “Defeat, Testimony, and Miracle Reports” (Commentator: Andy Cullison, SUNY-Fredonia)

BGND2014 is organized by Mark Murphy (Georgetown), Jonathan Kvanvig (Baylor), and Michael Rea (Notre Dame). 

Sunday, August 17, 2014

The start of meaning

The first meaningful performance did not get its meaning from earlier meaningful performances. So it seems that meaning preceded meaningful performances. Let's say the first meaningful performance was a pointing to a distant lion. Then pointing had a meaning before anybody meaningfully pointed.

Well, things aren't quite so simple. Maybe there is no "before" before the first meaningful performance, since maybe the first meaningful performance is an eternal divine meaningful performance (perhaps the generation of the Logos?). Or maybe the first meaningful performance got its meaning from later meaningful performances (Sellars seems to think something in this vicinity with respect to the relationship between thoughts and concepts) in some sort of virtuous circularity.

The theistic move seems just right to me. The virtuous circularity move is, however, not going to work. For the circle of performances, taken as a whole, then has a meaning that it does not get from anything else, and so we still get a meaningful performance—perhaps one by a community—that doesn't get its meaning from anywhere else.

One may have vagueness worries about the idea of a "first meaningful performance". Still, in a supervaluatist framework we can fix a precisification of "meaningful performance", and then the argument will go through.

Saturday, August 16, 2014

Mercenary motives

A stranger is drowning in a crocodile-infested river. To pull him out, you'd need to go in the water yourself, and there is a moderate chance (say, 25%) that you would be eaten. You have no dependents to whom you owe it not to risk your life, but of course you don't like being eaten by a crocodile. It would be praiseworthy for you to engage in this heroic act. But if you don't do it, you aren't doing anything morally wrong. I want the story to be set up so this is clearly a case of supererogatoriness.

You have decided not to do it. But then the stranger offers you a million dollars. And so you leap in and pull him to safety.

You're not particularly morally praiseworthy. But have you done anything morally wrong in acting on the mercenary motive? Nothing wrong would have been done had you refused to take the risk at all. Why would it be wrong to do it for money? Indeed, is your case any different from that of someone who becomes a firefighter for monetary reasons? But wouldn't it be odd if it were permissible to be a businessman for profit but wrong to be a firefighter for profit?

So the mere presence of a mercenary motive, even when this motive is a difference-maker, does not make an action wrong. Nor does this constitute the agent as vicious.

But what if the mercenary motive were the only operative motive? That would be constitutive of vice. There need be no vice if the decision whether to save another at significant risk to self is decided in favor of caution, and there need be no vice if money overrides the caution. But if the mercenary motive were the only motive, then that suggests that had there been neither danger to you nor promise of payment, you wouldn't have pulled out the stranger, because you simply don't care about the stranger's life. And that's vicious.

It is morally important, thus, that care for the stranger's life be among your motives, even if the mercenary motive is necessary (or even if it is necessary and sufficient) for the decision to save. Likewise, if you decide not to save, the motive of care for the stranger's life had better be operative. The decision had better be a conflicted one. Only for a vicious person would the decision not to save be a shoo-in.

Friday, August 15, 2014

Deciding to marry

Consider this line of thought.

  1. To decide to marry y for no reason is unreasonable.
  2. To decide to marry y on the grounds that y will make one happy is selfish.
  3. To decide to marry y on the grounds that one will make y happy is arrogant.
  4. To decide to marry y on the grounds that it will make the world a better place is to be full of oneself.
  5. To decide to marry y on the grounds that God is calling one to it is unavailable to atheists and claims an implausibly good understanding of God's designs.
So on what grounds can one decide to marry y?

I am inclined to think that (5) can be rejected: I think atheists can do something because God calls them to it, but they won't formulate the reason in that way. I also think we can have some understanding of what God calls us to. And while (3) and (4) may be true, perhaps all one needs as grounds is the thought that likely one will in a small way contribute to y's happiness and make the world a better place, and one need not be arrogant to think that.

Still, I think that even with these qualifications, something important is being left out of the story. Love, I guess. Love directly justifies the pursuit of a union appropriate to the love, and marriage is the union appropriate to certain forms of love.

Thursday, August 14, 2014

Against the viability of the a priori

  1. If the notion of the a priori is viable, there is a recursive logical system S, whose soundness is a priori, such that all a priori mathematical truths are provable in S.
  2. If the notion of the a priori is viable, then the truth of Peano Arithmetic (PA) is a priori.
  3. If the notion of the a priori is viable, then it is a priori that whatever is true is consistent in every sound logical system.
Assume the notion of the a priori is viable. By (1) and (2), PA is provable in S. By (3), the consistency in S of PA is a priori, and hence the consistency in S of PA is provable in S by (1). These conclusions contradict the soundness of S by Goedel's Second Incompleteness Theorem.

There are two ways of taking the above argument. One can take it as concerning the a priori as such, or the a priori for humans. Either way, the premises are plausible.

I think the main controversy is going to be about (1). To deny (1) would be to hold that there are infinitely many mathematical truths, not a priori reducible to a finite number of a priori assumptions, but that are nonetheless a priori. This is particularly weird if the a priori is a priori for us: Do we really have some mysterious inner capacity to cognize irreducibly infinitely many mathematical truths a priori? But even if the a priori is not relativized to humans, it's weird. In just what sense are all these mathematical claims a priori?

Let me sharpen the last point. We can restrict our attention to those mathematical truths that are formulable in our mathematical vocabulary, since these are the only ones that come up in the argument. But the a priori truths formulable in a given vocabulary seem to be basically the analytic ones. Are there really infinitely many independent analytic truths formulable in our mathematical vocabulary? Are we that rich that so much is implicit in what we say?

I think this does serious damage to the Chalmers project.

Wednesday, August 13, 2014

Two models of mathematics

I've been thinking lately about high-level parallels between three activities I engage in on a regular basis: philosophy, mathematics and computer programming. One of the obvious things that all of these have in common is that abstraction has a role, though what role it has differs between the three and within each, depending on what one is doing.

One model of mathematics is the abstractive model. "Aristotelian" is a label that comes to mind. Natural numbers abstract from pluralities, ratios abstract from pairs of natural numbers, fields abstract from arithmetic, morphisms abstract from homomorphisms, categories abstract from just about everything. There is an old category theory joke that mathematicians are so absent-minded because mathematics is all about the application of forgetful functors. Taking the joke's thesis literally probably isn't going to give an adequate picture of what, say, the number theorist or the harmonic analyst is doing, but the basic picture is clear: mathematicians abstract away, or forget, structure. Pluralities have all sorts of structure: these four cows have legs, spots, give milk, and have one dominant cow, but we forget about everything but the "four" to get numbers. When we go from pairs to ratios, we forget everything but the multiplicative relationship. And so on up.

The other model of mathematics is the constructive model (using the term very loosely, and without a commitment to constructivism). Here, we build our way up to more complex structures out of simpler ones. The most impressive example is just how much of mathematics can be seen as about sets and membership.

The abstractive model makes one think of very high level programming languages, of paradigms like functional programming or object-oriented programming. The constructive model makes one think of assembly language programming (some will lump C in here, and I guess that's not unfair). The models can be thought of as models of practice. If so, then they are complementary. Both with computers and mathematics, we need both the abstractive and the constructive practices. We need Java and assembly programmers; we need category theorists and real analysts. Different solutions are appropriate to different problems. You are unlikely to do a lot of abstraction when producing hand-optimized code for an 8-bit microcontroller, but you won't want to produce a banking system in this way. The abstractive approach ("abstract nonsense", as it is fondly called) is just the right thing for many (but not all) problems in algebra, but a constructive approach is likely to be the better solution for many (but not all) problems in harmonic analysis. And of course really good work can involve both, and a well-rounded programmer and mathematician can work in both modalities.

But besides thinking of the two models as models of practice, one can think of them as models of ontology. There is an abstractive ontology of mathematics. Mathematical facts are simply abstractions from facts about concrete things. And there is a constructive ontology, the most prominent starting with sets and showing how the mathematics we care about can be made to be about sets.

With computers, it's clear that the analogue to the constructive ontology is simply the truth of the matter. Computers are built up of transistors, the transistors run microcode, the microcode runs higher level machine code, and all the way up to Java, Haskell and the like. That elegant Haskell one-liner will eventually need to be implemented with microcode. And we have tools to bridge all the levels of abstraction. The proponent of the set-theoretic ontology in mathematics says that the analogy here is perfect. But that is far from clear. We need to take seriously the idea that mathematical abstractions are not built up out of simpler mathematical entities, but are either fundamental or are grounded in non-mathematical entities and pluralities (and their powers, I will say).

Nothing new here. Just laying out the territory for myself.

Monday, August 11, 2014

Nonpropositional views of conditionals, and lying

  1. Some conditional claims are lies. ("If you show this car to any mechanic in town, he'll tell you it's a great deal.")
  2. Conditional claims do not express propositions (but, say, conditional probabilities).
  3. Assertions express propositions.
  4. So, not all lies are assertions.

Of course, (2) is a quite controversial theory of conditionals. And one can turn the argument around: All lies are assertions, so conditional claims express propositions (at least in those cases; but one can generalize from them). But if one thinks that the argument should be turned around in this way, then one must make the same move for every non-cognitivist theory, since it takes only one non-cognitivist theory in whose domain lies can be made to yield conclusion (4). For instance, one must reject non-cognitivism in metaethics and aesthetics. So far so good. One probably doesn't need to reject non-cognitivism about requests, on the other hand, since one doesn't lie with requests: "I'll have fries with that" isn't a lie when you don't want fries.

Are there domains where one can lie but where non-cognitivism is clearly right? I am not sure. Maybe something like talk of the spooky? One certainly can lie in saying that something is spooky (e.g., to scare off a competing house buyer). But even there I am not convinced that the right conclusion is that not all lies are assertions. The right conclusion there may still be that we do in fact make assertions when we attribute spookiness.

Vagueness cases are another case to think about. I think propositions are always sharp, but we lie with claims that clearly do not express sharp propositions ("He was bald two days ago, but washed his hair with this shampoo, and now he's not"). But I don't think vagueness cases give one reason to accept (4). Rather, they lead to a refinement of (3): assertions don't express individual propositions, but something like a vague assignment of weights to propositions.

So overall, I don't know what to make of the argument.

Tuesday, August 5, 2014

Leibniz and the Gaifman-Hales Theorem

The basic idea behind Leibniz's characteristique is that all concepts are generated out of simple concepts, and there are no non-trivial logical relations between the simple concepts.

Today I was thinking about how to model this mathematically. Concepts presumably form a Boolean algebra. But infinities are very important to Leibniz. For instance, to each individual there corresponds a complete individual concept, which is an infinite concept specifying everything the individual does. So an ordinary Boolean algebra with binary conjunction and disjunction won't be good enough for Leibniz. We need concepts to form a complete Boolean algebra, one where an arbitrary set of elements has a conjunction and a disjunction.

So we want the space of concepts to be a complete Boolean algebra. We also want it to be generated by—built up out of—the set of simple concepts. Finally, we don't want there to be any nontrivial logical relations between the simple concepts. We want the theory to be entirely formal. This is one of Leibniz's basic intuitions. It seems to me that the way to formalize this condition is to say that the complete Boolean algebra is freely generated by the simple concepts.
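
To be precise, the universal property in play (my formulation, though it is the standard one) is this: a complete Boolean algebra B is freely generated by a set G of its elements provided that every function from G into any complete Boolean algebra C extends to a unique complete homomorphism from B to C. That is the formal counterpart of there being no nontrivial logical relations among the simple concepts: any assignment of values whatsoever to the generators can be realized.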

Pity, though. The Gaifman-Hales Theorem implies that if there are infinitely many simple concepts, there is no complete Boolean algebra freely generated by them (this assumes a quite weak version of the Axiom of Choice, namely that every infinite set contains a countably infinite subset).

It looks, thus, like the Leibniz project is provably a failure.

Perhaps not, though. Apparently if one relaxes the requirement that the complete Boolean algebra be a set and allows it to be a proper class, but keeps the idea that the simple concepts form a set, one can get a complete Boolean algebra freely generated by the simple concepts.

Still, it's interesting that from an infinite set of simple concepts, one generates a proper class of concepts.

Monday, August 4, 2014

Current task

In case anybody is interested, these days I'm working on a book with Josh Rasmussen arguing there is at least one necessary being (definition: an entity that is concrete and exists necessarily; an entity is concrete if and only if it is possibly a cause), based in part on the arguments on necessarybeing.net. We've got about 40000 words so far:

$ wc -w chapter?.tex
3019 chapter1.tex
7945 chapter2.tex
12295 chapter3.tex
9255 chapter4.tex
435 chapter5.tex
6676 chapter7.tex
4547 chapter8.tex
44172 total

I'm in the middle of chapter 8, which is a variant of my Goedelian ontological argument, with a special focus on the negative formulation.

A practical liar paradox in two words

I saw a woman with a tattoo that said only: CAVEAT LECTOR.

Saturday, August 2, 2014

Randomness and freedom

Consider cases where your decision is counterfactually dependent on some factor X that is not a part of your reasons and is outside of your (present or past) rational control. The kind of dependence that interests me is this:

  • In the presence of X, you decided on A, but had X been absent, you would have decided on B on the basis of the same set of reasons.
It's important here that X isn't just an enabler of your making a decision, nor is it one of the reasons—your reasons are the same whether X is present or not—but is an extrarational difference-maker for your action.

As far as rationality is concerned, these are cases of randomness. It doesn't matter whether X's influence is deterministic or not: the cases are random vis-à-vis reason.

In these cases, the best contrastive explanation of your decision is in terms of your reasons and X. And the counterfactual dependence on X, which is outside of your control, puts your freedom into question.

I think many cases of conflicted decisions have the following property:

  1. If determinism is true, then the case involves such counterfactual dependence on a factor outside of one's reasons and rational control.
But I also think that:
  2. Some of these cases are also cases of responsibility.
It follows that:
  3. Responsibility is compatible with such counterfactual dependence
or:
  4. Determinism is false.
If (3) is true, then a fortiori the kind of causal undetermination that is posited by event-causal libertarians does not challenge freedom.

I think the right conclusion to draw is (4). I think the counterfactual dependence here does indeed remove freedom. But I do not think the mere absence of a determiner like X is enough for freedom. Something needs to be put in the place of X. What? The agent! The problem with X is that it usurps the place of the agent. Thus I am inclined to think that freedom requires agent causation. I didn't see this until now.

Friday, August 1, 2014

Apart from Christ there is no hope

The deep realization that Christ is our only hope has significant existential force for a lot of people in motivating Christian faith. It is interesting that there need be nothing irrational here. In fact, a clearly valid argument can be given:

  1. Apart from Christ there is no hope.
  2. There is hope.
  3. If there is hope but apart from x there is no hope, then there is hope with x.
  4. If there is hope with Christ, the central doctrines of Christianity are true.
  5. So, there is hope with Christ. (1-3)
  6. So, the central doctrines of Christianity are true. (4-5)

Clichéd as that sounds, premise (1) really is something that I come to realize more and more deeply the longer I live. (See also this book by one of my distinguished colleagues.) Premise (3) is some kind of "logical truth". Premise (4) would, I think, take some defending. I think the central thought here is something like the idea that Christ is Lord, liar or lunatic, and in the latter two cases there is no hope with Christ.

Premise (2) is the crucial one. I suspect that accepting (2) in the relevant deep existential sense of "There is hope" usually, perhaps always, is a fruit of grace. There is darkness, but one sees that there is light shining in it even if one cannot identify the light.

One can perhaps, though this is very rare, argue oneself by the light of natural reason into accepting that the central doctrines of Christianity are true. But without grace one cannot argue oneself into accepting the central doctrines of Christianity (it is one thing to think something is true and another to accept it; I think the latest theorem proved by my colleagues at the Mathematics Department is true, but I don't accept it—if only because I don't know what it is!), much less into having faith in them.

It may be that for some people the point where grace enters the process of gaining faith is precisely at premise (2). If grace enters the process at acceptance of (2), this is quite interesting. For (2) is not overtly Christian. Yet when someone comes to faith in this way, with grace entering the process in conjunction with an existentially rich acceptance of (2), it plausibly follows that at that stage they already have faith. For, plausibly, there is no way to get to faith from something that isn't faith without grace. So that means that a deep existential acceptance of (2) (and that's not just a light and breezy optimism) could itself be faith.

The reflections on grace and faith are simply speculation. But the argument I stand by.