Thanks for the post. I’ve also been surprised how little this is discussed, even though the value of x-risk reduction is almost totally conditional on the answer to this question (the EV of the future conditional on human/progeny survival). Here are the big points I’d bring up on this issue, though some may be slight rephrasings of yours.
Interpersonal comparisons of utility canonically have two parts: a definition of utility, by which every sentient being is measured, and then, to compare and sum utility, a choice of (subjective) weights for each sentient being; you scale each being’s utility by its weight and add everything up (w_1·u_1 + ... + w_n·u_n). If we don’t agree on the weights, one person may think the future is positive in expectation while another thinks it is negative, even with perfect information about what the future will look like. Agreeing on weights could be even harder when we don’t know which agents are going to be alive. We have obvious candidates for general rules about how to weight utility (brain size, pain receptors, etc.), but who knows how our conceptions of these things will change.
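A toy sketch of how this plays out (all welfare numbers, populations, and weights below are made up purely for illustration):

```python
# Toy illustration: the same forecast of the future can come out positive or
# negative in expectation depending only on the (subjective) weights assigned
# to each kind of sentient being. All numbers here are invented.

# Hypothetical average lifetime utilities in some future scenario, by type of
# being (negative = life not worth living on this scale).
utilities = {"humans": 50.0, "farmed_animals": -10.0, "digital_minds": 5.0}

# Hypothetical population sizes in that scenario.
populations = {"humans": 1e10, "farmed_animals": 1e12, "digital_minds": 1e13}

def total_value(weights):
    """Weighted sum w_1*u_1*n_1 + ... + w_n*u_n*n_n over all beings."""
    return sum(weights[k] * utilities[k] * populations[k] for k in utilities)

# Two people agreeing on every empirical fact but not on the weights:
weights_a = {"humans": 1.0, "farmed_animals": 0.001, "digital_minds": 0.01}
weights_b = {"humans": 1.0, "farmed_animals": 0.1, "digital_minds": 0.001}

print(total_value(weights_a))  # positive total EV of the future
print(total_value(weights_b))  # negative total EV of the same future
```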
Basically repeating your last point in the chart, but it’s really important, so I’ll reiterate. Like everything else normative, there is no objective “0” line, no non-arbitrary point at which a life becomes worth living. It is a decision we have to make. Moreover, I don’t see any agreement in this community on where that point is. It is pretty obvious that disagreement about this could flip the sign of the EV of the future.
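Same kind of toy illustration as above (made-up numbers): hold the distribution of welfare fixed and just move where “zero” sits, and the sign of the total flips.

```python
# Toy illustration: the welfare distribution is held fixed; only the choice of
# the "neutral point" (the level at which a life counts as exactly worth
# living) changes. All numbers are invented.
welfare_levels = [2.0] * 60 + [5.0] * 30 + [9.0] * 10  # 100 hypothetical lives

def total_value(neutral_point):
    # Each life contributes (welfare - neutral_point).
    return sum(w - neutral_point for w in welfare_levels)

print(total_value(3.0))  # positive: most lives clear the bar
print(total_value(4.0))  # negative: the same lives now fall short
```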
“Alien counterfactuals.” I actually mentioned this in a comment on a previous post where someone said we should mostly just call longtermism x-risk (extremely wrong, in my opinion). First, for simplicity, let’s just assume humans become grabby. If we become grabby, a question of specific interest to us should be: what characteristics do our society and species have relative to other grabby societies/species? Are we going to be better or worse gatekeepers of the future than the other gatekeepers would be? I’m pretty sure we should take the prior that we display the mean characteristics of a grabby civilization (interested in hearing if others disagree). If so, then assuming (again for simplicity) that our lightcone will be populated by aliens whether or not we specifically become grabby, x-risk reduction could be argued to have exactly 0 expected value, since we have no reason to believe we will do a better job with the future than aliens would. Updating away from that prior would probably take the form of arguments about why our specific evolutionary or economic history was a weird way to become grabby, which is not an easy task. Of course, even with all the simplifying assumptions I’ve made, it’s not that simple. Even if we have the mean characteristics of the other grabby civilizations, adding more civilizations to the mix can change the game theory of space wars and governance, and it’s not clear whether more or fewer players is better. I’ve talked to a few people in EA about alien counterfactuals, and they all seemed to dismiss the argument, thinking that humans are better than “rolling the dice” on a new grabby civilization, but no one provided arguments that were super convincing. The most convincing counterargument I heard was that it is very unlikely that grabby aliens will actually end up existing in our lightcone, which subverts the whole argument. AI makes this significantly more confusing, but it’s not worth getting into without further ironing out the initial arguments.
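Here is the toy EV calculation behind that claim; all quantities are made-up placeholders (not estimates), and `value_if_humans` / `value_if_aliens` just stand in for the value of the lightcone under each gatekeeper.

```python
# Toy EV sketch of the alien-counterfactual argument. All quantities are
# invented placeholders, not estimates.

p_aliens_fill_lightcone = 1.0   # simplifying assumption: aliens become grabby regardless
value_if_humans = 100.0         # value of the lightcone if humans are the gatekeepers
value_if_aliens = 100.0         # under the "we are a mean grabby civilization" prior,
                                # expected alien-run value equals the human-run value

def ev_of_future(p_human_survival):
    """Expected value of the lightcone given our probability of surviving/becoming grabby."""
    return (p_human_survival * value_if_humans
            + (1 - p_human_survival) * p_aliens_fill_lightcone * value_if_aliens)

# Marginal value of x-risk reduction: raise our survival probability a bit.
print(ev_of_future(0.60) - ev_of_future(0.50))  # 0.0 under these assumptions

# The counterargument: if grabby aliens are unlikely to show up in our
# lightcone at all, the marginal value becomes positive again.
p_aliens_fill_lightcone = 0.1
print(ev_of_future(0.60) - ev_of_future(0.50))  # now positive
```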
And then this is sort of the whole point of your post, but I’ll reiterate: predicting the future is extremely difficult, and we should have very little confidence in what it will be like. Predicting whether the future will be good or bad (given that we have already ironed out the normative considerations, which we haven’t) is probably easier than predicting the future in detail, but still seems really difficult. The burden of evidence is on us to show the future will be good, not on other people to show it will be bad; after all, we are pumping huge amounts of money into creating impact that is completely conditional on this information. I’ve found posts like the one you mentioned to be the only type of thing that even feels tractable, and if that is the level of specificity we are at, it truly does feel like we have been Pascal’s-wagered on this issue. Posts like that ultimately don’t have nearly enough firepower to serve as anything more than an exploration of what a full argument would look like.