I tend to be sceptical of appeals to value as option-set dependent as a means of defending person-affecting views, for the reason that we needn’t imagine outcomes as things that someone is able to choose to bring about, as opposed to just things that happen to be the case. If you imagine the possible outcomes this way, then you can’t appeal to option-set dependence to block the various arguments, since the outcomes are not options for anyone to realize. And if, say, it makes the outcome better when an additional happy person happens to exist without anyone making it so, then it is hard to see why it should be otherwise when someone brings it about that the additional happy person exists. (Compare footnote 9 in this paper/report.)
If we assume that natural risks are negligible, I would guess that this probably reduces to something like the question of what probability you put on extinction or existential catastrophe due to anthropogenic biorisk. Since biorisk is likely to leave much of the rest of Earthly life unscathed, it also hinges on what probability you assign to something like “human-level intelligence” evolving anew. I find it reasonably plausible that a cumulative technological culture of the kind that characterizes human beings is unlikely to be a convergent evolutionary outcome (for the reasons given in Powell, Contingency and Convergence), and thus that if human beings are wiped out, there is very little probability of similar traits emerging in other lineages. So human extinction due to a bioengineered pandemic strikes me as maybe the key scenario for the extinction of Earth-originating intelligent life. Does that seem plausible?
I think my gut reaction is to judge extinction this century as at least as likely as lock-in, though a lot might depend on what’s meant by lock-in. But I also haven’t thought about this much!
I don’t remember 100%, but I think that Thomas and Pummer might both be arguing for or articulating not an axiological theory that ranks outcomes as better or worse, but rather a non-consequentialist theory of moral obligations/oughts. For my own part, I think views like that are a lot more plausible, but the view that it doesn’t make the outcome better to create additional happy lives seems to me very hard to defend.
Another reason you might have an upper bound is that the axioms of expected utility theory require your utility function to be bounded given the most natural generalization to the case of countably infinite gambles.
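To illustrate the kind of construction that drives this (a standard St. Petersburg-style gamble; the particular payoffs are chosen just for concreteness): if utility is unbounded above, we can choose outcomes $o_1, o_2, \ldots$ with $u(o_n) \geq 2^n$ and form the countable gamble that yields $o_n$ with probability $2^{-n}$. Its expected utility then diverges,

\[ \sum_{n=1}^{\infty} 2^{-n}\, u(o_n) \;\geq\; \sum_{n=1}^{\infty} 2^{-n} \cdot 2^{n} \;=\; \infty, \]

and gambles like this are what generate conflicts with the standard axioms once they are extended to countably infinite lotteries. Bounding the utility function rules the construction out.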
One reason you might believe in a difference in terms of tractability is the stickiness of extinction, and the lack of stickiness attaching to things like societal values. Here’s very roughly what I have in mind, running roughshod over certain caveats and the like.
The case where we go extinct seems highly stable, of course. Extinction is forever. If you believe some kind of ‘time of perils’ hypothesis, surviving through such a time should also result in a scenario where non-extinction is highly stable. And the case for longtermism arguably hinges considerably on such a time of perils hypothesis being true, as David argues.
By contrast, I think it’s natural to worry that efforts to alter values and institutions so as to beneficially affect the very long run by nudging us closer to the very best possible outcomes are far more vulnerable to wash-out. The key exception would be if you suppose that there will be some kind of lock-in event.
So does the case for focusing on better futures work hinge crucially, in your view, on assigning significant confidence to lock-in events occurring within the near term?
I tend to think that the arguments against any theory of the good that encodes the intuition of neutrality are extremely strong. Here’s one that I think I owe to Teru Thomas (who may have got it from Tomi Francis?).
Imagine the following outcomes, A–D, where the columns are possible people, the numbers represent the welfare of each person when they exist, and # indicates non-existence.

     p1   p2   p3   p4
A     5   −2    #    #
B     5    #    2    2
C     5    2    2    #
D     5   −2    7    #

I claim that if you think it’s neutral to make happy people, there’s a strong case that you should think that B isn’t better than A. In other words, it’s not better to prevent someone from coming to exist and enduring a life that’s not worth living if you simultaneously create two people with lives worth living. And that’s absurd. I also think it’s really hard to believe if you believe the other side of the asymmetry: that it’s bad to create people whose lives are overwhelmed by suffering.
Why is there pressure on you to accept that B isn’t better than A? Well, first off, it seems plausible that B and C are equally good, since they have the same number of people at the same welfare levels. So let’s assume this is so.
Now, if you accept utilitarianism for a fixed population, you should think that D is better than C, since all the same people exist in these outcomes, and there’s more total/average welfare. (I’m pretty sure you can support this kind of verdict on weaker assumptions if necessary.)
So let’s suppose, on this basis, that D is better than C. B and C are equally good. I assume it follows that D is better than B.
Suppose that B were better than A. Since D is better than B, it would follow that D is better than A as well. But we know this can’t be so, if it’s neutral to make happy people, because D and A differ only in the existence of an extra person who has a life worth living. So the neutrality principle entails that B isn’t better than A. But it’s absurd to think that B isn’t better than A.
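To make the structure easy to scan, here is the argument in compact form (just a restatement of the steps above, writing $\succ$ for ‘better than’ and $\sim$ for ‘equally good’):

\[
\begin{array}{ll}
(1)\;\; B \sim C & \text{same number of people at the same welfare levels}\\
(2)\;\; D \succ C & \text{fixed-population utilitarianism: } 5 - 2 + 7 > 5 + 2 + 2\\
(3)\;\; D \succ B & \text{from (1) and (2)}\\
(4)\;\; \neg (D \succ A) & \text{neutrality: D just adds a happy person to A}\\
(5)\;\; \neg (B \succ A) & \text{from (3) and (4): if } B \succ A \text{, then } D \succ A
\end{array}
\]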
Arguments like this make me feel pretty confident that the intuition of neutrality is mistaken.
I wasn’t sure if it’s really useful to think about value being linear in resources on some views. If you have a fixed population and imagine increasing the resources they have available, I assume that the value of the outcome is a strictly concave function of the resource base. Doubling the population might double the value of the outcome, although it’s not clear that this constitutes a doubling of resources. And why should it matter if the relationship between value and resources is strictly concave? Isn’t the key question something like whether there are potentially realizable futures that are many orders of magnitude more valuable than the default or where we are now? Answering yes seems compatible with thinking that the function relating resources to value is strictly concave and asymptotes, so long as it asymptotes somewhere suitably high up on the scale of value.
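As a toy illustration of that last thought (the functional form and numbers are made up purely for concreteness): take the bounded, strictly concave value function

\[ v(r) = V_{\max}\bigl(1 - e^{-kr}\bigr), \]

which asymptotes at $V_{\max}$ no matter how large the resource base $r$ grows. If $V_{\max}$ is, say, $10^{10}$ times the value of the default future, there remain realizable futures many orders of magnitude more valuable than the default, even though value is strictly concave, and indeed bounded, in resources.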
Could you clarify what you mean by ‘converge’? One thing that seems somewhat tricky to square is believing that convergence is unlikely, but that value lock-in is likely. Should we understand convergence as involving agreement in views facilitated by broadly rational processes, or something along those lines, to be contrasted with general agreement in values that might be facilitated by irrational or arational forces, of the kind that might ensure uniformity of views following a lock-in scenario?
Thanks, Richard! In some sense, I think I agree; as I say in the conclusion, I’m most inclined to think this is one of those cases where we’ve got a philosophical argument we don’t immediately know how to refute for a conclusion that we should nonetheless reject, and so we ought to infer that one of the premises must be false.
On the other hand, I’m most inclined to say that the problem lies in the fact that standard models using imprecise credences and their associated decision rules exploit too little structure in how they model our epistemic predicament. I still think our evidence fails to rule out probability functions that put sufficient probability mass on potential bad downstream effects to make AMF come out worse in terms of maximizing expected value relative to that kind of probability function. I’m more inclined to identify the problem as being that the maximality rule gives probability functions of that kind too much of a say when it comes to determining permissibility. Other standard decision rules for imprecise credences arguably suffer from similar issues. David Thorstad and I look a bit more in depth at decision rules that draw inspiration from voting theory and rely on some kind of measure on the set of admissible probability functions in our paper ‘Tough enough? Robust satisficing as a decision norm for long-term policy analysis’, but we weren’t especially sold on them.
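For readers who want the rule pinned down, here is the standard formulation of maximality (the notation is mine, not the paper’s): where $\mathcal{P}$ is the set of admissible probability functions,

\[ a \text{ is permissible} \iff \text{there is no option } b \text{ such that } \mathbb{E}_p[u(b)] > \mathbb{E}_p[u(a)] \text{ for all } p \in \mathcal{P}. \]

Since a single admissible $p$ on which $a$ does at least as well as $b$ blocks $b$ from beating $a$ across the board, fringe probability functions can veto any determinate ranking; that is the sense in which they get ‘too much of a say’.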
Thanks, Michael. Yes, you’re right: in the bit you quote from at the start, I’m assuming the bursts have some kind of duration rather than being extensionless. I think that probably got mangled in trying to compress everything!
The zero-duration frame possibility is an interesting one; some of Vasco’s comments below point in the same direction, I think. Is your thought that the problem is something like this: if you have these isolated points of experience which have zero duration, then, since there’s no experience there to which we can assign a non-zero objective duration, measuring duration objectively means counting those experiences as nothing, whereas intuitively that’s a mistake? There’s an experience of pain there, after all. It’s got to count for something!
I think that’s an interesting objection and one I’ll have to think more about. My initial reaction is that perhaps it’s bound up with a general weirdness that attaches to things that have zero measure but (in some sense) still aren’t nothing. For example, there’s something weird about probability-zero events that are nonetheless genuinely possible, and taking account of such events can lead to some odd interactions with otherwise plausible normative principles: it suggests a possible conflict between dominance and expected utility maximization (see Hajek, “Unexpected Expectations,” pp. 556–7, for discussion).
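To gesture at the kind of conflict at issue (a schematic case in the spirit of that discussion, not Hajek’s own example): let $E$ be a genuinely possible event with $P(E) = 0$, and let gamble $X$ pay exactly what gamble $Y$ pays outside $E$ but strictly more on $E$. Then $X$ dominates $Y$, being at least as good in every possible state and strictly better in some, and yet

\[ \mathbb{E}[u(X)] = \mathbb{E}[u(Y)], \]

since the states where the gambles differ carry zero probability. Expected utility maximization is indifferent exactly where dominance demands a strict preference.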
Thanks! I think that makes sense. I discuss something slightly similar on pp. 21–22 of the paper (following the page numbers at the bottom), albeit just the idea that you should count discrete pain experiences in measuring the extensive magnitude of a pain experience, without any attempt to anchor this in a deeper theory of how experience unfolds in time.
Maybe one thing I’m still a bit unsure of here is the following. We could have a view on which time is fundamentally discrete, rather than continuous. There are physical atoms of time, and how long something goes on for is a matter of how many such atoms it’s made up of. But, on its face, those atoms needn’t correspond to the ‘frames’ into which experiences are divided, since that kind of division among experiences may be understood as a high-level psychological fact. Similarly, the basic time atoms needn’t correspond to discrete steps in any physical computation, except insofar as we imagine fundamental physics as computational. Thus, experiential frames could be composed of different numbers of fundamental temporal atoms, and varying the hardware clock-speed could lead to the same physical computation being spread over more or fewer time atoms. This seems to give us some sense in which experiences and physical computations unfold in time, albeit in discrete time. However, I took it you wanted to rule that out, so probably I’ve misunderstood something about how you’re thinking about the relationship between the fundamental time atoms and computations/experiential frames, or I’ve just got totally the wrong picture?
Thanks for the question! There’s a lot more about how I arrive at this conception of subjective indistinguishability in the paper itself (section 4.2), but in terms of the analogy with your parody principle, notice that your definition of mathematical indistinguishability just says that there has to be a one-to-one mapping, whereas the proposed account of subjective indistinguishability says that there has to be such a mapping and the mapped pairs must always be pairwise indistinguishable to the subject. If I said that two ranges of numbers are mathematically indistinguishable if there’s a one-to-one mapping among them such that the numbers we map to one another are indistinguishable, that doesn’t sound too implausible and presumably doesn’t generate the counter-example you note? (Though it might turn on what we mean by saying that two numbers are ‘indistinguishable’!) If that’s right, then I don’t think my principle is challenged by the analogy with the parody principle you note.
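Putting the contrast schematically (this is only a restatement of the two principles; the symbol $\approx_s$, for pairwise indistinguishability to the subject, is mine): the parody principle requires only that there be a bijection $f : A \to B$, whereas the proposed account requires a bijection $f : A \to B$ such that

\[ x \approx_s f(x) \quad \text{for every } x \in A. \]

The extra pairwise condition is what blocks the counter-example: a bijection between two ranges of numbers exists whenever they have the same cardinality, but a bijection pairing indistinguishable items does not.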
I’m pretty sure that Broome gives an argument of this kind in Weighing Lives!