Another reason, which I forgot to write down earlier, to think that MacAskill’s method of determining the prior is flawed:
If one uses the same approach to come up with a prior that the second, third, fourth, and in general the Xth century is the hingiest century of the future, and then adds these priors together, one ought to get 100%. This is true because exactly one of the set of all future centuries must be the hingiest century of the future. Yet with MacAskill’s method of determining the priors, when one sums the individual priors that the hingiest century is century X over all X, one gets a number far greater than 100%. That is, MacAskill’s estimate is that there are 1 million expected centuries ahead, so he uses a prior of 1 in 1 million that the first century is the hingiest (before the arbitrary 10x adjustment). However, his model assumes that it’s possible that civilization could last as long as 10 billion centuries (1 trillion years). So what is his prior that e.g. the 2 billionth century is the hingiest? Also 1 in 1 million? Surely this isn’t reasonable, for if one uses a prior of 1 in 1 million for each of the 10 billion possible centuries, then one’s summed prior probability that one of those 10 billion centuries is the hingiest comes to 10,000 (i.e. 1,000,000%), when by definition one’s credence in this ought to be exactly 1 (100%).
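The arithmetic here is trivial but worth making explicit (the "1 in 1 million" prior and the "10 billion centuries" horizon are the figures from the text as I read them):

```python
# A flat 1-in-1-million prior applied to every one of 10 billion
# possible centuries does not sum to a valid probability distribution.
per_century_prior = 1 / 1_000_000     # prior that any given century is the hingiest
possible_centuries = 10_000_000_000   # up to 1 trillion years of future

total = per_century_prior * possible_centuries
print(total)  # 10000.0 -- i.e. 1,000,000%, not the 100% a prior must sum to
```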
My method of determining the prior doesn’t have this problem. On the contrary, as Column J of the spreadsheet linked in my previous comment shows, the prior probability that the hingiest century falls somewhere in the Century 1–1000 range (which I calculate by summing the individual priors for those thousand centuries) approaches 100% as the probability that civilization goes extinct within those first 1000 centuries approaches 100%.
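I don’t know the spreadsheet’s exact formula, but here is a hypothetical reconstruction of one simple prior with the property just described: spread each “civilization lasts exactly L centuries” scenario’s probability uniformly over its L centuries. The function name and the example numbers below are my own illustration, not necessarily what Column J computes.

```python
def hingiest_priors(lifespan_probs):
    """lifespan_probs[L-1] = P(civilization lasts exactly L centuries).
    Returns prior P(century X is the hingiest) for each century X."""
    n = len(lifespan_probs)
    priors = [0.0] * n
    for L, p in enumerate(lifespan_probs, start=1):
        if p == 0:
            continue
        for x in range(L):
            priors[x] += p / L  # split this scenario's mass evenly over its L centuries
    return priors

# Illustrative numbers: 90% chance civilization lasts exactly 1,000 centuries,
# 10% chance it lasts exactly 10,000 centuries.
lifespans = [0.0] * 10_000
lifespans[1_000 - 1] = 0.9
lifespans[10_000 - 1] = 0.1
priors = hingiest_priors(lifespans)

print(sum(priors))         # ~1.0: the priors form a proper distribution
print(sum(priors[:1000]))  # ~0.91: most mass on centuries 1-1000, since
                           # extinction within them is 90% likely
```

As the extinction probability within the first 1,000 centuries goes to 100%, the summed prior for those centuries goes to 100% as well, which is the behavior described above.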
Thanks, William! Yeah, I think I messed up this bit. I should have used the harmonic mean rather than the arithmetic mean when averaging over the possibilities for how many people will exist in the future. Doing this brings the chance of being among the most influential people ever close to the chance of being the most influential person ever in a small-population universe. But then we get the issue that being the most influential person ever in a small-population universe is much less important than being the most influential person in a big-population universe. And it’s only the latter that we care about.
So what I really should have said (in my too-glib argument) is: for simplicity, just assume a high-population future, since those are the action-relevant futures if you’re a longtermist. Then take a uniform prior over all times (or all people) in that high-population future. So my claim is: “In the action-relevant worlds, the frequency of ‘most important time’ (or ‘most important person’) is extremely low, and so our prior should be too.”
Thanks for the reply, Will. I go by Will too by the way.
for simplicity, just assume a high-population future, which are the action-relevant futures if you’re a longtermist
This assumption seems dubious to me because it ignores the nontrivial possibility that there is something like a Great Filter in our future that requires direct work to overcome (or that could benefit from direct work).
That is, maybe if we get one challenge in our near-term future right (e.g. handing off the future to a benevolent AGI), it will be more or less inevitable that life flourishes for billions of years, and if we fail to overcome that challenge, we will go extinct fairly soon. As long as you put a nontrivial probability on such a challenge existing in the short-term future and being tractable, then even longtermist altruists in small-population worlds (possibly ours) who punt to the future / pass the buck instead of doing direct work, and thus fail to make it past the Great-Filter-like challenge, can (I claim, contrary to you, as I understand your view) be said to be living in an action-relevant world despite living in a small-population universe. This is because they had the power (even though they didn’t exercise it) to make the future a big-population universe.