A few of these reasons do suggest that it might be useful to make grants in a cause area to stay open to it/keep actively researching it/keep potential grantees aware that you’re funding it. This would suggest that it’s worthwhile to spend relatively small amounts of money on less promising cause areas, but maintain spending to keep momentum.
This does have downsides:
It costs money. If you can afford to spend $200 million/year and you want to spend $5 million/year on each suboptimal cause area, then funding ten to twenty such areas would eat up a quarter to a half of your budget.
It costs staff time. You have limited capacity to do research and talk to grantees, so any time spent doing this in a suboptimal cause area is time spent not doing it in an optimal cause area. Maybe you could resolve this by putting only passing investment into less important areas and making grants without investigating them much.
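The budget arithmetic above can be sketched in a few lines (the dollar figures are the hypothetical ones from the example, not real numbers from any foundation):

```python
# Hypothetical figures from the example above: a $200M/year budget,
# with $5M/year going to each suboptimal cause area.
budget = 200_000_000
per_area = 5_000_000

# How many secondary cause areas would consume a quarter vs. half the budget?
areas_for_quarter = (budget * 0.25) / per_area
areas_for_half = (budget * 0.50) / per_area

print(areas_for_quarter, areas_for_half)  # 10.0 20.0
```

So the "quarter to a half" claim corresponds to funding roughly ten to twenty secondary areas at that rate.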
Making grants in secondary cause areas has benefits, but the question is, does it have sufficient benefits to make it better than spending those grants on the strongest cause area(s)?
It’s easier to spread the ideas behind effective altruism [...] if there is a prominent foundation which is known for the methodology that it uses to choose causes rather than for its support of particular causes.
Aside from the fact that I’m skeptical of this claim, Open Phil is fairly opaque about how it makes grant decisions. It produces writeups about the pros and cons of cause areas/grants, which is nice, but that doesn’t tell us why a given grant was chosen rather than some other grant, or why Open Phil has chosen to prioritize one cause area over another.
And like I said, I’m skeptical of this claim. Perhaps making grants to lots of cause areas promotes EA ideas. But since the standard EA claim is that individual donors should give to the single best cause, maybe a foundation would better promote EA ideas by focusing on the single best area until it has enough funding that it’s no longer best on margin. I don’t really know either way and I don’t know how one would know.
I’m also not convinced that promoting EA ideas is a good thing.
My intuition is that you might be overestimating how much information is available to donors? There is also uncertainty over the value of purchasing additional information. It seems you need to buy at least a little information in the best way you know how in order to start calibrating how valuable that information is, and thus how to make future information purchases.
Getting information is definitely important in a lot of cases. I believe it’s more important for narrow decisions (e.g. which interventions to support within a cause) than broad decisions (such as whether to prioritize short-term or far-future interventions). I don’t believe there’s much you could learn from making grants about how to prioritize short-term versus far-future interventions, since this depends mostly on theoretical questions and extremely long-term effects that you can’t really measure.
since this depends mostly on theoretical questions and extremely long-term effects that you can’t really measure.
This itself is the sort of hypothesis that we wish to test by doing additional research. What sort of actions, if any, have ever had predictable long-term consequences? What is the actual time horizon of, e.g., qualitative predictions (unknown) versus quantitative predictions (around 400 days, according to superforecasting work so far)?
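The value-of-information framing running through this exchange can be made concrete with a standard expected-value-of-perfect-information (EVPI) calculation. This is a toy sketch with made-up payoffs, not numbers from any actual cause-prioritization work:

```python
# Toy EVPI calculation (all numbers are invented for illustration).
# Two interventions, A and B; we're unsure which of two states of the
# world holds, and research would reveal the true state.
p_state1 = 0.5  # prior probability that state 1 holds

# Hypothetical value per dollar of each intervention in each state.
payoffs = {
    "A": {1: 10, 2: 2},
    "B": {1: 4, 2: 6},
}

def expected_value(option):
    return p_state1 * payoffs[option][1] + (1 - p_state1) * payoffs[option][2]

# Without more information, we pick the option with the best prior EV.
ev_without_info = max(expected_value(o) for o in payoffs)

# With perfect information, we'd pick the best option in each state.
ev_with_info = (p_state1 * max(payoffs[o][1] for o in payoffs)
                + (1 - p_state1) * max(payoffs[o][2] for o in payoffs))

evpi = ev_with_info - ev_without_info
print(ev_without_info, ev_with_info, evpi)  # 6.0 8.0 2.0
```

On these made-up numbers, research that settles which state holds is worth up to 2 units of value per dollar; the hard part, as the comments above note, is that for broad questions like short-term versus far-future prioritization, no feasible grant actually reveals the state.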