Using a distribution over possible futures seems important. The specific method you propose seems useful for getting a better picture of max_i P(century i most leveraged). However, what we want in order to make decisions is something more akin to max_i E[leverage of century i]. The most obvious difference is that scenarios in which the future is short and there is little one can do about it score highly on the probability of being most leveraged but low on expected leverage. I am unclear on whether a flat prior makes sense for the expectation, but it seems more reasonable than for the probability.
Of course, even max_i E[leverage of century i] does not accurately reflect what we are looking for. Similarly to Gregory_Lewis’s comment, the decision-relevant quantity (if ‘punting to the future’ is possible at all) is closer still to max_i E[what we will assess the leverage of century i to be at the time], i.e. whether we will have higher expected leverage in some future century according to our beliefs at that time. Thinking this through, I also find it plausible that even this does not make sense under the definitions in the post, and will make a related top-level comment.
While I agree with you that max_i P(century i most leveraged) is not that action-relevant, it is what Will is analyzing in the post, and I think that William Kiely’s suggested prior seems basically reasonable for answering that question. As Will said explicitly in another comment:
Agree that it might well be that even though one has a very low credence in HoH, one should still act in the same way. (e.g. because if one is not at HoH, one is a sim, and your actions don’t have much impact).
I do think that the focus on max_i P(century i most leveraged) is the part of the post that I am least satisfied by, and the part that makes it hardest to engage with, since I don’t really know why we care about the question “are we in the most influential time in history?”. What we actually care about is the effectiveness of our interventions to give resources to the future, and the marginal effectiveness of those resources in the future, both of which are quite far removed from that question (because of the difficulties of sending resources to the future, and because the answer to that question makes only a small difference to the total magnitude of the impact of any individual’s actions).
I agree that, among other things, discussion of mechanisms for sending resources to the future would be needed to make such a decision. I figured that all these other considerations were deliberately excluded from this post to keep its scope manageable.
However, I do think that one can interpret the post as making claims about a more insightful kind of probability: the odds that the current century is the one which will have the highest leverage-evaluated-at-the-time (in contrast to an omniscient view / end-of-time evaluation, which is what this thread mostly focuses on). I think that William_MacAskill’s main arguments are broadly compatible with both of these concepts, so one could get more out of the piece by interpreting it as being about the more useful concept.
Formally, one could see the thing being analysed as
P(i = 0 maximises E[leverage of century i | F_i]),
where F_i is the knowledge available at the beginning of century i. If we and all future generations may freely move resources across time, and some things that are maybe omitted from the leverage definition are held constant, this expression tells us with what odds we are correct to do ‘direct work’ today as opposed to transferring resources one century forward. (Confusion about what ‘direct work’ means noted here.)
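A minimal sketch of how one could evaluate this expression in a toy model (the world types, their probabilities, and the simplifying assumption that the world type is fully revealed by century 1 are all my own illustrative choices): F_0 is just the prior, while F_i for i >= 1 knows the realised world, so century 0 ‘wins’ exactly when its prior-expected leverage beats every later century’s realised leverage.

```python
# Hypothetical toy model: the world type is unknown at century 0 but fully
# revealed by century 1, so F_0 = the prior and F_i (i >= 1) = the realised type.
worlds = [
    (0.5, [3.0, 1.0, 1.0]),  # type A: leverage concentrated early
    (0.5, [3.0, 8.0, 2.0]),  # type B: leverage peaks at century 1
]

# E[leverage of century 0 | F_0]: prior expectation, the same in every world.
e0 = sum(p * lev[0] for p, lev in worlds)  # -> 3.0

# P(i = 0 maximises E[leverage of century i | F_i]): the probability mass of
# worlds in which century 0's prior-expected leverage beats every later
# century's realised leverage.
prob = sum(p for p, lev in worlds if e0 >= max(lev[1:]))
print(prob)  # -> 0.5: only in type-A worlds is 'direct work now' the right call
```

So in this toy model, punting one century forward is correct with probability 0.5, driven entirely by the type-B worlds where century 1 turns out to be more leveraged than century 0 looked ex ante.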
However, you seem to be right that as soon as you don’t hold other very important factors (such as how well one can send resources to the future) constant, those additional terms go inside the maximisation, and hence the above expression still isn’t that useful. (In particular, it can’t just be multiplied by an independent factor to get a usable expression.)
(Also, I feel like I’m mathing from the hip here, so quite possibly I’ve got this quite wrong.)