Hmm. It seems like the only way this differs from my account is that ‘cause comparisons’ are/should be the comparison of the top interventions, rather than just intervention. But the ‘cause comparison’ is still impossible without (implicitly) evaluating the specific things you can do.
Yes, that sounds correct to me. I think that's what I was trying to get across with "But I think the reasoning given in this post still isn't quite right, because I don't think we only care about the best interventions in each area; I think we also care about other identifiable positive outliers."
I.e., other than that point, I do agree with your discussion of how the question "How promising is a given cause area X rather than a cause area Y?" should be interpreted and roughly how it should be tackled.
I agree with this. I think one important consideration here is who the agents are for whom we are doing the prioritization.
If our goal is to start a new charity and we are comparing causes, then all we should care about is the best intervention (we can find) - the one which we will end up implementing. If, in contrast, our goal is to develop a diverse community of people interested in exploring and solving some cause, we might care about a broader range of interventions, as well as potentially some qualities of the problem which help increase overall cohesiveness between the different actors.
If, in contrast, our goal is to develop a diverse community of people interested in exploring and solving some cause, we might care about a broader range of interventions
I agree with this.
as well as potentially some qualities of the problem which help increase overall cohesiveness between the different actors
I'm not sure I understand this. Could you expand?
Sure. So, consider x-risk as an example cause area. It is a pretty broad cause area and contains narrower causes like mitigating AI risk or biorisk. Developing it as a common cause area involves advances like understanding what the different risks are, identifying relevant political and legal actions, making a strong ethical case, and gathering broad support.
So even if we think that the best interventions are likely in, say, AI safety, it might be better to develop a community around a broader cause area. (Here I'm thinking of "cause area" more in the sense of GiveWell's 2013 definition.)
I think one important consideration here is who the agents are for whom we are doing the prioritization.
If our goal is to start a new charity and we are comparing causes, then all we should care about is the best intervention (we can find) - the one which we will end up implementing.
This is a good point that I hadn’t thought of.
But I slightly disagree with the charity example. The main reason is that the intervention that's best in general may not be the one that's best for whatever audience we're talking to, due to personal fit. (In both cases, "best" should be interpreted as "best in expectation, on the margin, given our current knowledge and time available for searching", but that's irrelevant to the point I want to make.)
This is most obvious if we're planning to run the charity ourselves. It's less obvious if we're doing something more like what Charity Entrepreneurship does, where we'll ultimately seek out people from a large pool, since then we can seek people out partly based on personal fit for our charity idea. But:
our pool may still tend to be stronger in some areas than others, as is the case with EAs
if we have to optimise strongly for personal fit, we might have to sacrifice some degree of general competence/career capital/whatever, such that ultimately more good would’ve been done by a different founder running a charity that’s focused on an intervention that’d be less good in general (ignoring personal fit)
A smaller reason why I disagree is that, even if our primary goal is to start a new charity, a non-negligible fraction of the impact of our research may come from other effects (e.g., informing donors, researchers, or people deciding on careers unrelated to charity entrepreneurship). This seems to be the case for Charity Entrepreneurship, and analogous things seem to be the case for 80,000 Hours, GiveWell, etc. But this point feels more like a nitpick.
In any case, as mentioned, I do think that your point is a good one, and I think I only slightly disagree :)