I agree with this. I think one important consideration here is who the agents are for whom we are doing the prioritization.
If our goal is to start a new charity and we are comparing causes, then all we should care about is the best intervention (we can find) - the one which we will end up implementing. If, in contrast, our goal is to develop a diverse community of people interested in exploring and solving some cause, we might care about a broader range of interventions, as well as potentially some qualities of the problem that help increase overall cohesiveness between the different actors.
If, in contrast, our goal is to develop a diverse community of people interested in exploring and solving some cause, we might care about a broader range of interventions
I agree with this.
as well as potentially some qualities of the problem which help increase overall cohesiveness between the different actors
I'm not sure I understand this. Could you expand?
Sure. So, consider x-risk as an example cause area. It is a pretty broad cause area and contains secondary causes like mitigating AI risk or biorisk. Developing this as a common cause area involves advances like understanding what the different risks are, identifying relevant political and legal actions, making a strong ethical case, and gathering broad support.
So even if we think that the best interventions are likely in, say, AI safety, it might be better to develop a community around a broader cause area. (So, here I'm thinking of cause area more like that in GiveWell's 2013 definition.)
I think one important consideration here is who the agents are for whom we are doing the prioritization.
If our goal is to start a new charity and we are comparing causes, then all we should care about is the best intervention (we can find) - the one which we will end up implementing.
This is a good point that I hadn't thought of.
But I slightly disagree with the charity example. The main reason is that the intervention that's in general best may not be the one that's best for whatever audience we're talking to, due to personal fit. (In both cases, "best" should be interpreted as "best in expectation, on the margin, given our current knowledge and time available for searching", but that's irrelevant to the point I want to make.)
This is most obvious if we're planning to run the charity ourselves. It's less obvious if we're doing something more like what Charity Entrepreneurship does, where we'll ultimately seek out people from a large pool, since then we can seek people out partly based on personal fit for our charity idea. But:
our pool may still tend to be stronger in some areas than others, as is the case with EAs
if we have to optimise strongly for personal fit, we might have to sacrifice some degree of general competence/career capital/whatever, such that ultimately more good would've been done by a different founder running a charity that's focused on an intervention that'd be less good in general (ignoring personal fit)
A smaller reason why I disagree is that, even if our primary goal is to start a new charity, it may be the case that a non-negligible fraction of the impact of our research comes from other effects (e.g., informing donors, researchers, people deciding on careers unrelated to charity entrepreneurship). This seems to be the case for Charity Entrepreneurship, and analogous things seem to be the case for 80,000 Hours, GiveWell, etc. But this point feels more like a nit-pick.
In any case, as mentioned, I do think that your point is a good one, and I think I only slightly disagree :)