Thanks for writing this all up! A few small comments:
And, for the Time of Perils view to really support HoH, it’s not quite enough to show that extinction risk is unusually high; what’s needed is that extinction risk mitigation efforts are unusually cost-effective. So the view must hold not only that extinction risk is unusually high at this time, but also that longtermist altruists are unusually well-placed to decrease those risks, perhaps because extinction risk reduction is unusually neglected.
It could even be the case that extinction risks are unusually low right now, but this period is nonetheless unusually critical because mitigation is unusually tractable. For example, suppose the main risks to mankind were asteroids or supervolcanoes. Prior to the 20th century, there was little we could do about them, and after the 21st century we will have mature space colonies, so they will no longer be extinction risks. Only in the interim can we do anything to reduce the probability, by researching the threats, attempting to redirect asteroids, accelerating colonization, and so on.
The primary reasons for believing (2) are that, if we’re in a simulation: it’s much more likely that the future is short; extending our future doesn’t change the total amount of lived experience (because the simulators will just run some other simulation afterwards); and we’re missing some crucial consideration about how to act.
I know you mention acausal decision theories elsewhere, but I think it is worth bringing them up here. If we are in an ancestor simulation, it is rational for us to try to reduce existential risk, because this decision is acausally entangled with the decision of the ‘original’ people, whose existential risk reduction efforts causally led to the existence of the simulation.
Similarly, I think your prior over our position needs to directly address anthropic Doomsday-type arguments.
In contrast, if you are more sympathetic to moral realism (or a more sophisticated form of subjectivism), as I am, then you’ll probably be more open to the idea that future people will have a better understanding of what’s of value than you do now, and this gives another reason for passing the baton to future generations.
I think you might be overstating the case here. Suppose you assigned 20% credence to some sort of subjectivist / Lovecraftian parochialism that places a high value on our actual values right now, 50% to meta-ethical moral realism with predicted moral progress in the future, and 30% to other views (e.g. moral realism but not moral progress). This would suggest a nearly 20% credence that now is a hinge period, even though on the moral realist theory now is not an especially important time. And because so much more is at stake if now really is the hinge, for moral uncertainty reasons we should act as if now is an unusually important period.
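Spelling out the arithmetic, on the crude assumption that only the subjectivist branch makes now a hinge: 0.2 × 1 + 0.5 × 0 + 0.3 × 0 = 0.2, i.e. roughly a 20% credence that this is a hinge period. Any hinge-probability contributed by the “other” branch would only push that figure higher.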
“even then you might still want to save money in a Victorian-values foundation to grant out at a later date”
I suspect, unfortunately, that the money may end up being essentially stolen and used for other purposes. There are many examples of this; a classic one is the Ford Foundation, which now promotes goals quite different from those Henry Ford wanted.
I agree with your reasoning concerning uncertainty.
In the arguments against HoH, there’s an appeal to the uncertainty of our evaluations of “Influence”. However, the definition of the most influential time itself depends on an evaluation of the opportunity costs of investing in one time vs. another (such as the short term vs. the long term).
Uncertainty is a double-edged sword: I get confused when someone argues for “give later” mostly on the grounds of our current uncertainty about impact (actually, uncertainty often induces risk-aversion and presentist bias). Suppose that I currently have a credence of 0.7 in the statement “AMF saves at least a life (30 QALY) for every US$3,000”; if I wait ten years, I can hope my confidence in such statements will increase to something like 0.8. However, my confidence in such an increase is just 0.9, so when I aggregate all of this uncertainty, it’s almost a draw: 0.72.
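Making the aggregation explicit with those point estimates: 0.9 (my confidence that the increase happens) × 0.8 (the increased credence) = 0.72, barely above the 0.7 I hold today. On these numbers, waiting ten years buys almost no epistemic edge.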
(Sorry about using point estimates, but I’m no statistician, and I guess we’d better keep it simple.)
Something similar applies to “start a movement”, and I didn’t even mention cluelessness and value drift.
So, if I donate to a Fund that promises to invest in the best actions for the long-term future instead of the short term, I have to trust that: a) the world is not going to end first (so I have to discount by extinction rates); b) the Fund and the underlying financial structure will not end first (or significantly lose value); c) the Fund will correctly identify a more influential moment; and d) its investment will be aligned with my impartial preferences (i.e., what I would decide if I had the same information).
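To illustrate how these conditions compound (with purely illustrative numbers of my own): if each of a) through d) independently held with probability 0.9 over the relevant horizon, the donation would do what I intend with probability of only 0.9^4 ≈ 0.66. The more conditions the plan chains together, the steeper this discount gets.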