Wednesday, June 29, 2016

Thomson's Lamp and change

Start with two Thomson's Lamps. They each have toggle switches and are on at 10 a.m. The switches are toggled at 10:30, 10:45, 10:52.5 and so on. Now suppose, as is surely possible, that regardless of what, if any, state the lamps would have had at 11 a.m., aliens come and instantaneously force the first lamp to be on at 11, and force the second to be off (say, by breaking the bulb!).

Now some but not all causal interactions are changes. Which lamp's on/off state did the aliens change? There are four possible answers:

  1. The first but not the second
  2. The second but not the first
  3. Both
  4. Neither.

Symmetry considerations rule (1) and (2) out of court. Can we say that in both cases, the aliens changed the on/off state of the lamp? Surely not. For if something can have only two states, it can't be that each of the two possible state inductions counts as a change. Moreover, if the lamp were to have changed state, what state did it change from? For inducing an on state only counts as a change if the induction starts with the lamp in an off state, and vice versa. But the induction didn't start with the lamp on, nor did it start with the lamp off. That leaves only the last option: Neither.

But it seems that if an object has been persisting, and a causal interaction induced a state in that object, that causal interaction either was a state-change or a state-maintenance. So if in neither case did the aliens change the state of the lamp, then it seems that in both cases they maintained the state. But we get analogues of (1)-(4), and analogues of the above arguments also lead to the conclusion that in neither case did the aliens maintain the state.

So the Thomson's Lamp story forces us to reject the dichotomy between state-change and state-maintenance.

Here's another curious thing. It seems that the following is true:

  5. If an object has state A at t1 and non-A at t2, then the object's having state non-A at t2 is the result of a change.
But applying (5) shows that both lamps' final states are the results of a change. And that change must thus have been from the opposite state. And yet the final state doesn't follow right after the opposite state.

One could use this as an argument against the possibility of infinitely subdivided time. Alternately, one could use this as an argument against principles like (5) and the idea that the concepts of change and maintenance are as widely applicable as we thought them to be.

IRToWebThingy

For quite some time, my older daughter has wanted me to make her Great Wolf Lodge Magiquest wand do something at home. So I made a simple IRToWebThingy (the link gives build instructions). It takes infrared signals from many different infrared remote controls and makes them available over WiFi. As a result, we can now watch Netflix with a laptop, a projector, and the ceiling, and play/pause with the Magiquest wand (and adjust volume with our DVD remote) using a pretty simple Python script. My son (with some help) made a 3D etch-a-sketch script that lets him draw in Minecraft with the DVD remote. I made a fun script that lets you fly a pig in Minecraft with a Syma S107 helicopter remote (see photo). You can even control rooted Android devices with IR remotes and shell scripts.
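Here, for flavor, is a minimal sketch of the kind of glue script involved--not the actual script, and it assumes (hypothetically) that the IRToWebThingy streams each decoded IR code as a line of text over a plain TCP socket; the host, port, and wand code below are made-up placeholders, and pyautogui is used to send the play/pause keypress:

```python
# Hypothetical sketch: assumes the IRToWebThingy exposes received IR codes as
# newline-terminated text over a TCP socket. Host, port, and the wand's code
# are placeholders; adjust them to whatever your setup actually reports.
import socket
import pyautogui  # sends the keypress to the video player window

THINGY_HOST = "192.168.1.50"    # hypothetical address of the thingy
THINGY_PORT = 8883              # hypothetical port
WAND_CODE = "MAGIQUEST:1A2B3C"  # whatever code the wand actually sends

def listen():
    with socket.create_connection((THINGY_HOST, THINGY_PORT)) as s:
        buf = b""
        while True:
            buf += s.recv(1024)
            while b"\n" in buf:
                line, buf = buf.split(b"\n", 1)
                if line.decode(errors="ignore").strip() == WAND_CODE:
                    pyautogui.press("space")  # toggle play/pause in the player

if __name__ == "__main__":
    listen()
```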

Knowledge and scepticism

Some years back I learned that when epistemologists talk about the "problem of scepticism", they are thinking about theses that deny that we have much knowledge about, say, the external world. When I found this out, I was disappointed. I thought that the problem epistemologists were dealing with was whether we can be reasonable in believing things about the external world. I care about that, not about whether we know. Suppose it were to turn out that knowledge has to be infallible, and so indeed we know nothing empirical about the external world. This would hardly bother me. It would be too bad that I couldn't say that I know that I have two hands, but my rational confidence that I have two hands would be just as high as it ever was.

We sell the scepticism at the beginning of Descartes' Meditations and throughout Sextus Empiricus short if we think of it as simply a denial of knowledge. If we take seriously Descartes' invocation of the possibility of dreaming or of a deceitful demon, that tends to undercut not only the claim that we know, but also the claim that we are rational in being confident in ordinary empirical beliefs, and maybe even the claim that it is rational to hold such beliefs to be more likely true than not. This is the more radical and more interesting scepticism: it questions whether we have any non-circular reason to think we aren't being deceived. (And in the end I think Descartes' answer to it is pretty good.)

The problem of induction in mathematics

Let's say I have some algorithm that generates a sequence of numbers, and I run ten iterations on the computer and get

  • 1
  • 1.5
  • 1.41666666667
  • 1.41421568627
  • 1.41421356237
  • 1.41421356237
  • 1.41421356237
  • 1.41421356237
  • 1.41421356237
  • 1.41421356237
  • 1.41421356237

I will now be very confident that the sequence of numbers converges, and indeed that it converges to the square root of two. But why? Convergence is a property that the sequence has at infinity, and the first 10 items are a vanishingly small proportion of an infinite sequence. Moreover, why do I assume that the sequence converges to the square root of two? Maybe it converges to the square root of two plus e^-100. Such possibilities ensure that my credence that the limiting value is the square root of two is strictly less than one. But the credence stays high.
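For the curious, the listed numbers are what Newton's method for the square root of two produces; here is a quick sketch that reproduces them, assuming that is indeed the algorithm behind the list:

```python
# Newton's method for sqrt(2): x_{n+1} = (x_n + 2/x_n) / 2, starting from 1.
# Ten iterations give the eleven values listed above (to 11 decimal places).
def newton_sqrt2(iterations=10):
    x = 1.0
    values = [x]
    for _ in range(iterations):
        x = (x + 2.0 / x) / 2.0
        values.append(x)
    return values

for v in newton_sqrt2():
    print(f"{v:.11f}")
```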

In other words, the standard problems of induction come up not just in science, but in mathematics. We should thus hope that whatever solutions we adopt to the problems as they come up in science will apply in the mathematical cases as well.

My favorite story about induction, the theistic story that God would have good reason to--and hence be not unlikely to--create a well-ordered universe, does not apply to mathematics, since, pace Descartes, God doesn't choose the truths of mathematics. A relative of the story does apply, however. The truths of mathematics are grounded in the necessary nature of the mind of God, as Augustine held, and it is to be expected that the necessary nature of the mind of God will exhibit beauty and elegance. And where there is beauty and elegance, there is pattern and some purchase for induction.

Emotivism about art

Being largely tone-deaf, I don't appreciate music much at all. In fact, the older I get, the more music tends to annoy me, particularly when I meet it in the background of daily activities. I would much rather listen to 4'33'' than to Mozart or Beethoven.

Yet I know that Mozart's and Beethoven's music is much more beautiful. Hence I embody a counterexample to emotivism about art. It is fun to be a counterexample (but I realize that it would be more fun to appreciate music, and I hope to do so in heaven).

Sunday, June 26, 2016

Teaching programming with Python and Minecraft

Last summer, I taught programming to gifted kids with Python and Minecraft. Here's my Instructable giving a curriculum for a course like that.

Friday, June 24, 2016

Teaching programming with AgentCubes

I made an Instructable giving the curriculum I used for a mini-course for gifted middle- and high-school kids teaching programming by making 3D games with AgentCubes.

Thursday, June 23, 2016

RaspberryJamMod for Minecraft/Forge 1.10

My RaspberryJamMod, which enables the Minecraft Pi API for Python programming, has been updated to work with Minecraft/Forge 1.10. Still alpha-quality, but everything I've tried seems to work.

Tuesday, June 21, 2016

Cardinality paradoxes

Some people think it is absurd to say, as Cantorian mathematics does, that there are no more real numbers from 0 to 100 than from 0 to 1.

But there is a neat argument for this:

  1. If the number of points on a line segment that is 100 cm long equals the number of points on a line segment that is 1 meter long, then the number of real numbers from 0 to 100 equals the number of real numbers from 0 to 1.
  2. The number of points on a line that is 100 cm long equals the number of distances in centimeters between 0 and 100 cm.
  3. The number of points on a line that is 1 meter long equals the number of distances in meters between 0 and 1 meter.
  4. The number of distances in centimeters between 0 and 100 cm equals the number of real numbers between 0 and 100.
  5. The number of distances in meters between 0 and 1 meter equals the number of real numbers between 0 and 1.
  6. A line is 100 cm if and only if it is 1 meter long.
  7. Equality in number is transitive.
  8. So, the number of points on a line that is 100 cm long equals the number of points on a line that is 1 meter long.
  9. So, the number of distances in centimeters between 0 and 100 cm equals the number of distances in meters between 0 and 1 meter.
  10. So, the number of real numbers between 0 and 100 equals the number of real numbers between 0 and 1.
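The conclusion looks less strange once the standard bijection behind it is written down explicitly: a one-to-one, onto map pairs the reals from 0 to 1 with the reals from 0 to 100, which is all that the Cantorian claim of equal number amounts to.

```latex
% The scaling map is a bijection between [0,1] and [0,100]:
f : [0,1] \to [0,100], \qquad f(x) = 100x, \qquad f^{-1}(y) = y/100.
% Since f pairs each real in [0,1] with exactly one real in [0,100] and vice
% versa, the two sets have the same cardinality, as conclusion (10) says.
```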

Monday, June 20, 2016

Finitism and mathematics

Finitists say that it is impossible to actually have an infinite number of things. Now, either it is logically contradictory that there is an infinite number of things or not. If it is not logically contradictory, then the finitist's absurdity-based arguments are seriously weakened. If it is logically contradictory, on the other hand, then standard mathematics is contradictory, since standard mathematics can prove that there are infinitely many things (e.g., primes). But from the contradictory everything follows ("explosion", logicians call it). So in standard mathematics everything is true, and standard mathematics breaks down.

I suppose the finitist's best bet is to say that an infinite number of things is not logically contradictory, but metaphysically impossible. That requires a careful toning down of some of the arguments for finitism, though.

Thursday, June 16, 2016

Measure of confirmation

Suppose I played a lottery that involved my picking ten digits. The digits I picked are: 7509994361. At this point, my probability that I won, assuming I had to get every digit right to win, is 10^-10. Suppose now that I am listening to the radio and the winning number's digits are announced: 7, 5, 0, 9, 9, 9, 4, 3, 6 and 1. My sequence of probabilities that I am the winner after hearing the successive digits will approximately be: 10^-9, 10^-8, 10^-7, 10^-6, 10^-5, 10^-4, 0.001, 0.01, 0.1 and approximately 1.

If we think that to get a significant amount of evidential confirmation for a hypothesis we need to give a significant absolute increment to the probability, only the final two or three digits gave significant confirmation to the hypothesis that I won, as the last digit increased my probability by 0.9, the one before by 0.09, and the one before by 0.009. But it seems quite wrong to me to say that as I am listening to the digits being announced one by one, I get no significant evidence until the last two or three digits. In fact, I think that each digit I hear gives me an equal amount of evidence for the hypothesis that I won.
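One natural way to make the "equal amount of evidence" idea precise is to measure evidence by Bayes factors (equivalently, changes in log-odds) rather than absolute probability increments. On that measure each announced digit contributes roughly the same amount. A rough sketch of the arithmetic, treating the digits as independent and uniformly distributed:

```latex
% A matching digit is about ten times as likely if I won as if I did not:
\frac{P(\text{digit matches} \mid \text{won})}{P(\text{digit matches} \mid \text{did not win})}
  \approx \frac{1}{1/10} = 10,
% so each digit multiplies the odds of winning by about 10, i.e.
\Delta(\log \mathrm{odds}) \approx \log 10 \ \text{per digit},
% even though the absolute probability increments range from about 10^{-9} up to 0.9.
```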

Here's an argument for why it's wrong to say that a significant absolute increment of probability is needed to get significant confirmation. Let A be the collective evidence of learning digits 1 through 7 and let B be the evidence of learning digits 8 through 10. Then on the absolute increment view, A provides me with no significant confirmation that I won, but when I learn B after learning A, then B provides me with significant confirmation. However, had I simply learned B, without learning A first, that wouldn't have afforded me significant confirmation--my probability of winning would still have been 10^-7. So the combination of two pieces of evidence, both insignificant on their own, gives me conclusive evidence.

There is nothing absurd about this on its own. Sometimes two insignificant pieces of evidence combine to conclusive evidence. Learning (C) that the winning number is written on a piece of paper is insignificant on its own; learning (D) that 7509994361 is written on the piece of paper is insignificant on its own; but combined they are conclusive. Yes: but that is a special case, where the two pieces of evidence fit together in a special way. We can say something about how they fit together: they fail to be statistically independent conditionally on the hypothesis I am considering. Conditionally on 7509994361 being the winning number, the event that 7509994361 is what is written on the paper and the event that what is written on the paper is the winning number are far from independent. But in my previous example, A and B are independent conditionally on the hypothesis I am considering, as well as being independent conditionally on the negation of that hypothesis. They aren't pieces of evidence that interlock like C and D are.

Wednesday, June 15, 2016

Temporal nonlocality of consciousness

We are incapable of having a pain that lasts only a nanosecond (or a picosecond or, to be really on the safe side, a Planck time). Anything that is that short simply wouldn't consciously register, and an unconscious pain is a contradiction in terms. But now suppose that I have a constant pain for a minute. And then imagine a human being who lives only for a nanosecond, and whose states during that nanosecond are exactly the same as my states during some nanosecond during my minute of pain. Such a person wouldn't ever be conscious of pain (or anything else), because events that are that short don't consciously register.

Say that a property is temporally punctual provided that whether an object has that property at a given time depends at most on how the object is at that time. For instance, arguably, being curved is temporally punctual while being in motion is not. Say that a property is temporally nanoscale provided that whether an object has that property at a time t depends at most on how the object is on some interval of time extending a nanosecond before and after t. Any temporally punctual property is temporally nanoscale as well. An example of a non-nanoscale property is having been seated for an hour. The reflections of the first paragraph make it very plausible that our conscious experiences are neither temporally punctual nor temporally nanoscale. Whether I am now in pain at t depends on more than just the nanosecond before and after t. There is a temporal non-locality to our consciousness.

This I think makes very plausible a curious disjunction. At least one of these two claims is true:

  1. My conscious states are non-fundamental.
  2. The time sequence along which my conscious states occur is not the time sequence of physical reality.
For, plausibly, all fundamental states that happen along the time sequence of physical reality are nanoscale (and maybe even temporally punctual). Option (1) is friendly to naturalism. Option (2) is incompatible with naturalism.

Some non-naturalists think that conscious states are fundamental. If I am right, then they must accept (2). And accepting (2) is difficult given the A-theory of time on which there is a single pervasive time sequence in reality. So there is an argument here that if conscious states are fundamental, then probably the A-theory is false. There may be some other consequences. In any case, I find it very interesting that our conscious states are not nanoscale.

Tuesday, June 14, 2016

Naturalism and second-order experiences

My colleague Todd Buras has inspired this argument:

  1. A veridical experience of an event is caused by the event.
  2. Sometimes a human being is veridically experiencing that she has some experience E at all the times at which E is occurring.
  3. If (1) and (2), then there is intra-mental simultaneous event causation.
  4. If naturalism about the human mind is true, there is no intra-mental simultaneous causation.
  5. So, naturalism about the human mind is not true.
In this argument, (4) is a posteriori: if naturalism is true, mental activity occurs at best at the speed of light. I am sceptical about (2) myself.

Conceiving people who will be mistreated

Alice and Bob are Elbonians, a despised genetic minority. It seems that unless the level of mistreatment that members of this minority suffer is extreme, it is permissible for Alice and Bob to have a child.

But now suppose that Carol and Dan are not Elbonian, but have a child through a procedure that ensures that the child is Elbonian. It seems that Alice and Bob's procreation is permissible, but Carol and Dan are doing something wrong. Yet in both cases they are producing a child that will be, we may suppose, the subject of the same mistreatment.

We understand, of course, Alice and Bob's intentions: they want to have a child, and as it happens their child will be Elbonian. But we have a harder time understanding what Carol and Dan are doing. Are they trying to make their child be the subject of discrimination? If so, then it's clear why they are acting wrongly. But we can suppose that both couples are motivated in the same way. Perhaps both couples really like the way that Elbonian eyes look, and that is why Alice and Bob do not seek out genetic treatment to root out the Elbonian genes while Carol and Dan seek treatment to impose these genes.

Thinking about this case makes me think that there is a significant difference between just letting nature take its course reproductively and deliberately modifying the course of reproduction. But there are a lot of hard questions here.

Some thoughts on the ethics of creating Artificial Intelligence

Suppose that it's possible for us to create genuinely intelligent computers. If we achieved genuine sapience in a computer, we would have the sorts of duties towards it that we have towards other persons. There are difficult questions about whether we would be likely to fulfill these duties. For instance, it would be wrong to permanently shut off such a computer for anything less than the sorts of very grave reasons that make it permissible to disconnect a human being from life support (I think here about the distinction between ordinary and extraordinary means in the Catholic tradition). Since such reasons are not likely to be typical, it seems that we would likely have to keep such a computer running indefinitely. But would we be likely to do so? So that's part of one set of questions: Can we expect to treat such a computer with the respect due to a person, and, if not, do we have the moral right to create it?

Here's another line of thought. If we were going to make a computer that is a person, we would do so by a gradual series of steps that produce a series of systems that are more and more person-like. Along with this gradual series of steps would come a gradation in moral duties towards the system. It seems likely that progress along the road to intelligence would involve many failures. So we have a second set of questions: Do we have the moral right to create systems that are nearly persons but that are likely to suffer from a multitude of failures, and are we likely to treat these systems in the morally appropriate way?

On the other hand, we (except the small number of anti-natalists) think it is permissible to bring human beings into existence even though we know that any human being brought into the world will be mistreated by others on many occasions in her life, and will suffer from disease and disability. I feel, however, that the cases are not parallel, but I am not clear on exactly what is going on here. I think humans have something like a basic permission to engage in procreation, with some reasonable limitations.

Simple chase AI via heat propagation

I've been teaching programming to gifted kids using AgentCubesOnline. Consistently, kids want to make games where scary things chase you. But how to make a simple algorithm for the scary things to come to you? The most naive algorithm is to have them go in whatever direction the player is, but that doesn't work with obstacles. This morning I came up with a very simple solution that is super-easy to implement in AgentCubes: use heat propagation. Basically, set the maximum heat value at the player, set zero heat at obstacles, let the heat propagate at some fixed rate (e.g., by averaging the heat at a cell with the heat at neighboring cells every fixed amount of time; you can adjust the rate by changing the time-step and the weighting), and have the monsters always go in the direction of increasing heat.
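Here is a minimal Python sketch of the idea--not the AgentCubes implementation--in which the player's cell is pinned at maximum heat, obstacle cells at zero, every other cell is blended toward the average of its neighbors at some rate, and a monster steps to whichever neighboring cell is hottest:

```python
# Heat-propagation chase sketch: heat is a 2D list of floats, one per grid cell.
def diffuse(heat, obstacles, player, max_heat=100.0, rate=0.5):
    rows, cols = len(heat), len(heat[0])
    new = [row[:] for row in heat]
    for r in range(rows):
        for c in range(cols):
            if (r, c) in obstacles:
                new[r][c] = 0.0          # obstacles stay cold, blocking the flow
            elif (r, c) == player:
                new[r][c] = max_heat     # the player is the heat source
            else:
                nbrs = [heat[r + dr][c + dc]
                        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                        if 0 <= r + dr < rows and 0 <= c + dc < cols]
                # blend toward the neighbor average; rate controls propagation speed
                new[r][c] = (1 - rate) * heat[r][c] + rate * (sum(nbrs) / len(nbrs))
    return new

def monster_step(heat, pos):
    """Move one cell in the direction of increasing heat (or stay put)."""
    r, c = pos
    rows, cols = len(heat), len(heat[0])
    options = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
               if 0 <= r + dr < rows and 0 <= c + dc < cols]
    best = max(options, key=lambda rc: heat[rc[0]][rc[1]])
    return best if heat[best[0]][best[1]] > heat[r][c] else pos
```

Each game tick one would call diffuse once (or a few times) and then monster_step for each monster; the rate parameter and the number of diffusion passes per tick play the role of the time-step and weighting mentioned above.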

A particularly nice artifact of this approach is that as the player character navigates the board, it leaves a heat trail, and so instead of simply making a bee-line for you, the monsters exhibit a combination of following your trail and heading straight for you. The monsters' following you, while at the same time taking occasional shortcuts, provides a pretty good illusion of intentionality on their part. Try it here (use arrow keys to move the ladybug).

I also got a nice bonus from what started out as a bug: the monsters have zero heat, which means that they can't get stuck in a local maximum as they will cool off that maximum.

Wednesday, June 8, 2016

Counterfactuals and the randomness objection to libertarianism

The randomness objection to libertarian free will says that undetermined choices could only be random rather than reason-governed.

I want to consider a bit of a reply to this. Suppose that you are choosing between A and B. You have a reason R for A and a reason S for B, and you freely end up choosing A. I think the following will be true, and I think the libertarian can say that they are true as well:

  1. If the reason R for A were stronger, you'd still have chosen A.
  2. If the reason S for B were weaker, you'd still have chosen A.
  3. If the reason R for A were noticeably weaker, you might not have chosen A.
  4. If the reason S for B were noticeably stronger, you might not have chosen A.
If this is right, then there is a real counterfactual dependence of your action on the reasons. The dependence isn't as precise as the compatibilist's dependence. The compatibilist's story may allow for precise values of strengths of reasons that produce counterfactuals like (3) and (4) with quantitative antecedents and "would" rather than "might". Still, I don't think anything so precise is needed for reasons-governance of our actions.

So, I think that if I am right that the libertarian can reasonably affirm (1)-(4), then the randomness objection fails. Of the four, I don't think there is any difficulty with (3) and (4): even if there were pure randomness, we would expect (3) and (4) to be true. So the question is: Can the libertarian affirm (1) and (2)? And I think (1) and (2) are in the same boat, so really the question is whether the libertarian can affirm (1).

And I say: Why not? At the same time, I know that when I've talked with fellow libertarians about this, they've been pretty sceptical about counterfactuals like (1). Their main reason for scepticism was van Inwagen's re-run argument: In indeterministic free choice situations, if you repeated the same circumstances, you'd get different results. And you'd expect to get different results in repeat runs even if you somewhat raised the strength of the reasons for A.

I agree with the re-run intuition here, but I don't see it as incompatible with (1). The re-run intuition is about what we would get at a later time, albeit in a situation that is qualitatively the same. But (1) is about what would have happened at the time you made the original choice, albeit in a situation that was tweaked to favor A more.

Tuesday, June 7, 2016

Molinism and the Principal Principle

Molinism says that there are non-trivial conditionals of free will and that God providentially decides what to create on the basis of his knowledge of them. I shall assume, for simplicity of examples, that what Molinism says about free will it says about other non-determined events. The Principal Principle says that when you know for sure that the objective chance of some future event is r, the epistemic probability you should assign to that event is r.

Suppose a dictator has set up a machine that will flip an indeterministic and fair coin. On heads, the coin will trigger a bomb that will kill almost everyone on earth. On tails, it will do nothing. Since the coin is fair, the objective chance of heads is 1/2. But suppose you are sure that Molinism is true. Then you should think: "Likely, God would only allow this situation to happen if he knew that the coin flipped in these circumstances would land tails. So, probably, the coin will land tails." Maybe you aren't too convinced by this argument--maybe God would allow the coin to land heads and then miraculously stop the bomb, or maybe God is fed up with 99% of humankind. But clearly taking into account your beliefs about God and Molinism will introduce some bias in favor of tails.

This seems to be a violation of the Principal Principle: the objective chance of heads is 1/2 but the rational epistemic probability of heads is at least a little less than 1/2.

Not so fast! The "objective chances" need to be understood carefully in cases where foreknowledge of the future is involved. An assumption behind the Principal Principle is that our evidence only includes information about the past and present. If we know that a true prophet prophesied tails, then our credence in tails should be high even if the coin is fair. Given Molinism, the fact that God allowed the coin toss to take place is information about the future, since it indicates that the coin is less likely to land heads given the disastrous consequences.

So, Molinism is compatible with the Principal Principle, but it renders the Principal Principle inapplicable in cases where it matters to God how a random process will go. But everything that matters matters to God. So the Principal Principle is inapplicable in cases where the outcomes of the random process matter, if Molinism is true. This renders the Principal Principle not very useful. Yet it seems that we need the Principal Principle to reason about the future, and we need it precisely in the cases that matter. So we have a bit of a problem for Molinists.

Monday, June 6, 2016

An argument that Trans-World Depravity is unlikely to be true

Assume Molinism. Plantinga's Trans-World Depravity (TWD) is the thesis that every feasible world--a world compatible with the conditionals of free will--that contains at least one significantly free choice contains at least one sin. I want to think about an argument that TWD is likely false.

For consider a world where God creates exactly one intelligent creature with the motivations and character of a typical morally upright adult human being. God then forbids the creature from imposing pointless pain on itself, and only ever gives the creature one significantly free choice: to eat a nutritious food that it likes or to endure five hours of torture. Let's imagine the situation where God creates such a creature and it's about to make that one significantly free choice. Call these circumstances C. Given what we know about decent human beings and their motivations, the creature would very likely eat the nutritious food rather than be tortured. Very well. So very likely the conditionals of free will are such that the world where the creature eats the nutritious food is feasible. But if that world is feasible, then TWD is false.

That was too quick. I jumped between answers to two different probabilistic questions:

  1. What is the epistemic probability of the Molinist conditional that were C to obtain, the creature would choose wrongly?
  2. Were C to obtain, what would be the chance of the creature choosing wrongly?
It is clear that the answer to (2) is "Very low." But to argue that TWD is very likely false, I have to say that the answer to (1) is also "very low". This leads to a difficult set of questions about the relationship between Molinist conditionals and chances. Lewis's Principal Principle does imply that if we were to knowingly (with certainty) find ourselves in C, and if we were certain of Molinism, we would have to give the same answer to (1) and (2). The argument goes as follows: given C, the Molinist conditional has the same epistemic probability as its consequent, but the epistemic probability of its consequent is the same as its chance by the Principal Principle. But the answer we should give to (1) in those circumstances where we were knowingly in C may not be the same as the answer we should actually give to (1). Consider this possibility. Our current epistemic probability of the Molinist conditional in (1) is 1/2, but God would be very unlikely to make C obtain unless the conditional were false. He just wouldn't want to create a world where the creature would freely wrongfully choose to endure the torture. In that case, if we were to learn that C obtains, that would give us information that the Molinist conditional is very likely false. And hence the answer to (1) is "1/2", the answer to (2) is "Very low", but were C to obtain, the answer to (1) would be "Very low" as well.

Maybe. But I think things may be even less clear. For the biased sampling involved in God's choosing what to create on the basis of conditionals of free will undercuts the Principal Principle, I think. More work needs to be done to figure out whether there is a good argument against TWD here.

Thursday, June 2, 2016

The value of communities

A men's lacrosse team has twice as many members as a basketball team. But that fact does not contribute to making a men's lacrosse team twice as valuable as a basketball team. Likewise, China as a country isn't about 500 times as valuable as Albania just because it is about 500 times as populous. This suggests that an otherwise plausible individualist theory about the value of a community is false: the theory that a community's value resides in the value it gives to individuals. For the kind of value that being on a basketball team confers on its players, being on a lacrosse team confers on twice as many; and the kind of value that being Albanian confers on its members, being Chinese confers on almost 500 times as many people. One possibility is to see the relevant goods as goods of instantiation: it is good that the values of there being a lacrosse team (or at least of a pair of lacrosse teams: a single team being pointless), there being a basketball team (or a pair of them), there being a China and there being an Albania be realized. But I think that isn't quite right. For while changing the rules of basketball to admit twice as many players to a team wouldn't automatically double the community good, doubling the number of basketball teams does seem to significantly increase the community goods by making there be twice as many basketball communities.

In fact, there seem to be three goods in the case of basketball: (a) the good of instantiation of there being basketball teams (and their playing); (b) the community good of each team; and (c) the good for each involved in these communities. Good (a) is unaffected by doubling the number of teams (unless we double from one to two, and thereby make playing possible); good (b) is doubled by doubling the number of teams; good (c) is doubled both by doubling the number of teams and by doubling the team size. Thinking about the behavior of (b) gives us good reason to think that this good does not reduce to the goods of the individuals as such.

But perhaps this reason isn't decisive. For maybe the goods of individuals can overlap, in the way that Siamese twins seem to be able to share an organ (though the right ontology of shared organs may in the end undercut the analogy), and in such a case the goods shouldn't be counted twice even if they are had twice. For in these cases, perhaps, the numerically same good is had by two or more individuals. If you and I are both friends of John, and John is flourishing, then John's flourishing contributes to your and my flourishing, but it doesn't contribute thrice over even though this flourishing is good for three--we should count overall value by goods and not by participants. Maybe. This would be a kind of middle position between the individualist and communitarian pictures of the value of community: there is a single good of type (b), but it is good by being participated in by individuals.

I don't know. I find this stuff deeply puzzling. I have strong ontological intuitions that communities don't really exist (except in a metaphorical way--which may well be important) that pull me towards individualist pictures, but then I see these puzzles...

Goods of instantiation

It's good that there has been someone with Albert Schweitzer's character, or that there is a planet like Saturn. When we talk of the value of there being a thing of a particular sort, we're talking of the value of some property--say, being a ringed gas giant of such and such appearance--being instantiated. Intuitively, that value of instantiation resides in the thing that instantiates the property: it seems clear that Albert Schweitzer and Saturn have the goods of instantiating Schweitzerlikeness or Saturnlikeness.

But there is a problem with this obvious thing to say. For if the value of Saturnlikeness being instantiated is found in Saturn, then if there were a second planet, Shmaturn, that was Saturnlike, we would have that good twice over, once in each planet. But that misunderstands how goods of instantiation work. It is good that Saturnlikeness be instantiated. It isn't twice as good that it be instantiated twice.

So what do goods of instantiation reside in? One could answer: "Nothing. Goods don't need to have a substrate as a home." But I do have the Aristotelian intuition that goods have something like a metaphysical home, that it is good that p only if there is something for which it is good that p. Maybe I should abandon that intuition. But let's see what we can say given that intuition.

If Platonism is true, then a very natural answer is that the goods of instantiation are goods for the instantiated properties. This leads to a very interesting idea. Artists do good to the Platonic realm by instantiating it. God benefited kangarooness by creating kangaroos. This seems a bit crazy.

If theism is true, then perhaps goods of instantiation are actually good for God (in a non-internal way that is compatible with divine aseity). God has something supra-Schweitzerlike or supra-Saturnlike about him. It is good for God--it glorifies him--for there to be something Schweitzerlike or Saturnlike. This approach may not actually be that different from the Platonist one if properties are divine ideas.