Thanks, William!

Yeah, I think I messed up this bit. I should have used the harmonic mean rather than the arithmetic mean when averaging over the possibilities for how many people there will be in the future. Doing this brings the overall chance of being the most influential person ever close to the chance of being the most influential person ever in a small-population universe. But then we get the issue that being the most influential person ever in a small-population universe is much less important than being the most influential person ever in a big-population universe. And it’s only the latter that we care about.
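To illustrate the harmonic-mean point with made-up numbers (the figures here are my own, not from the original argument): suppose the future contains either $10^3$ or $10^6$ people, each with probability $1/2$, and within each world every person is equally likely to be the most influential. Then the prior chance of being the most influential person ever is

$$\frac{1}{2}\cdot\frac{1}{10^3} \;+\; \frac{1}{2}\cdot\frac{1}{10^6} \;\approx\; \frac{1}{1998},$$

that is, one over the harmonic mean of the two population sizes, which sits close to the small-population figure of $1/10^3$ rather than splitting the difference with $1/10^6$.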
So what I really should have said (in my too-glib argument) is: for simplicity, just assume a high-population future, since such futures are the action-relevant ones if you’re a longtermist. Then take a uniform prior over all times (or all people) in that high-population future. So my claim is: “In the action-relevant worlds, the frequency of ‘most important time’ (or ‘most important person’) is extremely low, and so our prior should be too.”
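As a further illustration with an assumed number (again not one from the original argument): if the high-population future contains $10^{12}$ people, a uniform prior over those people gives each of them only a $10^{-12}$ chance of being the most influential person ever, which is the sense in which the prior is “extremely low”.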
Thanks for the reply, Will. I go by Will too by the way.
for simplicity, just assume a high-population future, since such futures are the action-relevant ones if you’re a longtermist
This assumption seems dubious to me because it appears to ignore the nontrivial possibility that there is something like a Great Filter in our future that requires direct work to overcome (or that could at least benefit from direct work).
That is, maybe if we get one challenge in our near-term future right (e.g. handing off the future to a benevolent AGI), then it will be more or less inevitable that life flourishes for billions of years, whereas if we fail to overcome that challenge, we will go extinct fairly soon. As long as you put a nontrivial probability on such a challenge existing in the short-term future and being tractable, then even longtermist altruists in small-population worlds (possibly ours) who punt to the future / pass the buck instead of doing direct work, and thus fail to make it past the Great-Filter-like challenge, can (I claim, contrary to you, on my understanding) be said to be living in an action-relevant world despite living in a small-population universe. This is because they had the power (even though they didn’t exercise it) to make the future a big-population universe.