Aren’t there interventions that could be considered (with relatively high probability) robustly positive with regards to the long term future? Somewhat more abstract things such as “increasing empathy” or “improving human rationality” come to mind, though I suppose one could argue that they might plausibly have a negative impact on the future. Another one is certainly “reducing existential risks”, unless you weigh suffering risks so heavily that it’s unclear whether preventing existential risk is good or bad in the first place.
Regarding such causes—given we can identify robust ones—it then may still be valuable to analyze cost-effectiveness, as there would likely be a (high?) correlation between cost-effectiveness and positive impact on the future.
If you were to agree with that, then maybe we could reframe your argument from “cost-effectiveness may be of low value” to “cause areas outside of far future considerations are overrated (and hence their cost-effectiveness is measured in a way that is of little use)” or something like that.
Aren’t there interventions that could be considered (with relatively high probability) robustly positive with regards to the long term future?
I agree that interventions like this exist, and I think we identify them by making theoretical cases for & against.
Regarding such causes—given we can identify robust ones—it then may still be valuable to analyze cost-effectiveness
As above, I think cost-effectiveness can be useful for determining which intervention to focus on within a specific domain (e.g. “which intervention most increases empathy?” could benefit from a cost-effectiveness analysis).
But for questions about which domain to focus on, I don’t think cost-effectiveness gives much lift (e.g. “is it better to focus on increasing empathy or improving nuclear security?” is the kind of question that seems intractable to cost-effectiveness analysis).