When planning how to donate, it seems very important to consider the impact of market returns increasing due to progress in AI. But I think more considerations should be taken into account before drawing the conclusion in the OP.
For each specific cause, we should estimate how the EV of an additional dollar invested in 2019 changes depending on when it is eventually used (given an estimate of market returns over time). As Richard pointed out, for reducing AI x-risk it is not obvious we will have time to effectively use the money we invest today if we wait too long (so "the curve" for AI safety might be sharply decreasing).
Here is another consideration I find relevant for AI x-risk: in slow takeoff worlds, more people are likely to become worried about x-risk from AI (e.g. after they see that the economy has doubled in the past 4 years and that lots of weird things are happening). In such worlds, the money donated by people who are currently worried about AI x-risk might end up being a very small fraction of all the money eventually allocated to reducing it, so marginal donations from today's donors would matter less there. This consideration might make us increase the weight we place on fast takeoff worlds.
On the other hand, in slow takeoff worlds there may generally be a lot more that can be done to reduce x-risk from AI (especially if slow takeoff correlates with longer timelines), which pushes in the opposite direction, toward increasing the weight we place on slow takeoff worlds.
If you think a fast takeoff is more likely, it probably makes more sense to either invest your current capital in tooling up as an AI alignment researcher, or to donate now to your favorite AI alignment organization (Larks’ 2018 review (a) is a good starting point here).
I just wanted to note that some of the research directions for reducing AI x-risk, including ones that seem relevant in fast takeoff worlds, are outside of the technical AI alignment field (for example, governance/policy/strategy research).