Hi Vasco, thanks for your questions! I'll answer what I see as the core of them before providing some quick responses to each individually.
As you suggest, our approach is very similar to Open Philanthropy's worldview diversification. One way of looking at it is that we want to provide donation recommendations that maximise cost-effectiveness from the perspective of a particular worldview. We think it makes sense to add a further constraint: we prioritise providing advice for the more plausible worldviews that are consistent with our approach (i.e., focusing on outcomes, having a degree of impartiality, and wanting to rely on evidence and reason).
I'll share how this works with an example. The "global health and wellbeing" cause area contains recommendations that appeal to people with (some combination of) the following beliefs:
1. We should prioritise helping people over animals
2. We should be somewhat sceptical of highly theoretical theories of change, and prefer donating to charities whose impact is supported by evidence
3. It's very valuable to save a life
4. It's very valuable to improve someone's income
People may donate to the cause area while holding only some of these beliefs, or perhaps none of them but another motivation not listed here. They may also have more granular beliefs on top of these, which means they might only be interested in a subset of the fund (e.g., focusing on charities that improve lives rather than save them).
Many of your questions seem to be suggesting that, when we account for consumption of animal products, (3) and (4) are not so plausible. I suspect that this is among the strongest critiques of the worldviews that would support GHW. I have my own views about it (as would my colleagues), but from a "GWWC" perspective, we don't feel confident enough in this argument to use it as a basis for not supporting this kind of work. In other words, we think the worldviews that would want to give to GHW are sufficiently plausible.
I acknowledge there's a question-begging element to this response: I take it your point is why is it sufficiently plausible, and who decides this? Unfortunately, I must acknowledge that we don't have a strong justification here. It's a subjective judgement formed by the research team, informed by existing cause prioritisation work from other organisations. We don't feel well-placed to do this work directly (for much the same reason as we need to evaluate evaluators rather than doing charity evaluation ourselves). We would be open to investigating these questions further by speaking with organisations engaged in this kind of cause prioritisation; we'd love to have a more thoughtful and justified approach to cause prioritisation. In other words, I think you're pushing on the right place (and hence this answer isn't particularly satisfying).
More generally, we're all too aware that there are only two of us working directly on deciding our recommendations, and we are reluctant to use our own personal worldviews in highly contested areas to determine them. Of course, this has to happen to some degree (and we aim to be transparent about it). For example, if I were to donate today, I would likely give 100% of my donations to our Risks and Resilience Fund. I have my reasons, and think I'm making the right decision according to my own views, but I'm aware others would disagree with me, and in my role I need to make decisions about our recommendations through the lens of commonly held worldviews I disagree with.
I'll now go through your questions individually:
If someone wanting to donate $1M who was not pre-committed to any particular area asked for your advice on which of your recommended funds is most cost-effective, and wanted to completely defer to you without engaging in the decision process, what would you say?
We'd likely suggest donating to our cause area funds, via the "all cause bundle", splitting their allocation equally between the three areas. This is our default "set-and-forget" option, which seems compelling from the perspective of wanting to give a fraction of one's giving to causes that are maximally effective from particular worldviews. This is not the optimal allocation under moral uncertainty (on this approach, the different worldviews could "trade" and increase their combined impact); we haven't prioritised trying to find such an optimised portfolio for this purpose. It'd be an interesting project, and we'd encourage anyone to do this and share it on the Forum and with us!
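To make the "worldviews could trade" point concrete, here is a toy sketch in Python. Every number and fund name in it is invented purely for illustration (none of these are real GWWC figures): two worldviews that would each fund their own favourite option can both do better, by their own lights, by agreeing to fund a compromise option they both rate nearly as highly.

```python
# Toy illustration of how worldviews could "trade" to increase their
# combined impact. All values are invented, purely for illustration:
# value[worldview][fund] is how much each worldview thinks a dollar
# to each fund achieves.
value = {
    "A": {"X": 10, "Y": 9, "Z": 0},   # A: X is best, Y nearly as good
    "B": {"X": 0, "Y": 9, "Z": 10},   # B: Z is best, Y nearly as good
}

def score(allocation):
    """Score an allocation (dollars per fund) by each worldview's lights."""
    return {w: sum(v[f] * allocation.get(f, 0) for f in v)
            for w, v in value.items()}

budget = 100

# Equal split across all funds (the "set-and-forget" default).
equal = {f: budget / 3 for f in ("X", "Y", "Z")}

# No coordination: each worldview sends its half to its own favourite fund.
favourites = {"X": budget / 2, "Z": budget / 2}

# "Trade": both agree to fund the compromise option Y instead.
traded = {"Y": budget}

print(score(favourites))  # each worldview scores 500
print(score(equal))       # each worldview scores ~633
print(score(traded))      # each worldview scores 900: a Pareto improvement
```

This is why an equal split is generally not optimal under moral uncertainty: it leaves mutually beneficial reallocations like this on the table.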
Are you confident that donating to the Animal Welfare Fund (AWF) is less than 10 times as cost-effective as donating to GiveWell's Top Charities Fund (TCF)? If not, have you considered investigating this?
We are not confident. This is going to depend on how you value animals compared to humans; we're also not sure exactly how cost-effective the AWF is (just that it is the best option we know of in a cause area we think is generally important, tractable and neglected).
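As a toy illustration of why this hinges on how you value animals (every figure below is hypothetical, not an estimate from GWWC, GiveWell, or the AWF): if you model each fund as producing welfare units per dollar, the AWF-vs-TCF cost-effectiveness ratio scales linearly with the moral weight placed on animals, so the "10x" question largely reduces to a question about that weight.

```python
# Toy model of how the AWF-vs-TCF comparison hinges on the moral weight
# given to animals. Every number here is hypothetical.

def awf_vs_tcf_ratio(animal_weight,
                     animal_units_per_dollar=1000,  # hypothetical
                     human_units_per_dollar=10):    # hypothetical
    """Cost-effectiveness of AWF relative to TCF, where animal_weight
    converts animal welfare units into human-equivalent units."""
    return animal_weight * animal_units_per_dollar / human_units_per_dollar

# With these made-up inputs, AWF comes out more than 10x TCF exactly
# when the weight on animals exceeds 0.1:
for w in (0.001, 0.01, 0.1, 1.0):
    print(w, awf_vs_tcf_ratio(w))
```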
If you thought donating to the AWF was over 10 times as cost-effective as donating to TCF (you may actually agree/disagree with this), would you still recommend the latter (relatedly)? If so, would you disclaim that your best guess was that AWF was significantly more cost-effective than TCF?
If we thought there wasn't a sufficiently plausible worldview whereby TCF was the best option we knew of, we would not recommend it.
Are you confident that donating to TCF is beneficial accounting for effects on animals? If not, have you considered investigating this? I did not find "animal" or "meat" in your evaluation of GiveWell.
We did not consider this, and so do not have a considered answer. I think this would be something we would be interested in considering in our next investigation.
If you thought donating to TCF resulted in a net decrease in welfare due to the meat-eater problem (you may actually agree/disagree with this), would you still recommend it? If so, would you disclaim that your best guess was that TCF resulted in a net decrease in welfare, but that you recommended it for other reasons?
As above, we would not recommend it if we didn't think there was a sufficiently strong worldview by which TCF was the best option we knew of. This could be because of a combination of the meat-eater problem and our concluding that it's just not plausible to discount animals. It's an interesting question, but it's also one where I'm not sure coming to a view on it is our comparative advantage (though perhaps, just as we did with the view that GiveWell should focus on economic progress, we could still discuss it in our evaluation).
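For readers unfamiliar with the structure of the meat-eater problem, a minimal sketch (all quantities hypothetical): the net effect of saving a life is the direct human benefit minus the weighted animal welfare cost of the extra animal-product consumption that the saved life entails, so the sign flips once that weighted cost exceeds the benefit.

```python
# Toy sketch of the meat-eater problem. All quantities are hypothetical
# and chosen only to show how the sign of the net effect depends on the
# moral weight given to animals.

def net_welfare(human_benefit, extra_animal_years, harm_per_animal_year,
                animal_weight):
    """Direct human benefit minus the weighted animal welfare cost of
    the extra animal farming the intervention induces."""
    return human_benefit - animal_weight * extra_animal_years * harm_per_animal_year

# Low weight on animals: saving a life looks clearly net positive.
print(net_welfare(100, 500, 1, animal_weight=0.01))  # 95.0
# Higher weight: the same intervention comes out net negative.
print(net_welfare(100, 500, 1, animal_weight=0.5))   # -150.0
```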
Thanks for the thoughtful reply, and for being transparent about your approach, Michael! Strongly upvoted.
To better reflect how your different recommendations are linked to particular worldviews, I think it would be good to change the name of your area/fund "global health and wellbeing" to "global human health and wellbeing" (I would also drop "health", as it is included in "wellbeing"). Another reason for this is that Open Phil's area "global health and wellbeing" encompasses both human and animal welfare.
We did not consider this [GiveWell's top charities' effects on animals], and so do not have a considered answer. I think this would be something we would be interested in considering in our next investigation.
I think it would be great if you looked into this at least a little.
I acknowledge there's a question-begging element to this response: I take it your point is why is it sufficiently plausible, and who decides this? Unfortunately, I must acknowledge that we don't have a strong justification here. It's a subjective judgement formed by the research team, informed by existing cause prioritisation work from other organisations.
I think it makes sense that GWWC's recommendations are informed by the research team. However, I wonder how much of your and Sjir's views are being driven by path dependence. GWWC's pledge donations from 2020 to 2022 towards improving human wellbeing were 9.29 (= 0.65/0.07) times those towards improving animal welfare. Given this, I worry you may hesitate to recommend interventions in animal welfare over human welfare even if you found it much more plausible that both areas should be assessed under the same (impartial welfarist) worldview. This might be a common issue across evaluators. Maybe some popular evaluators realised at some point that rating charities by overhead was not a plausible worldview, but meanwhile they had built a reputation for assessing them along that metric, and had influenced significant donations based on such rankings, so they continued to produce them. I hope GWWC remains attentive to this.
To better reflect how your different recommendations are linked to particular worldviews, I think it would be good to change the name of your area/fund "global health and wellbeing" to "global human health and wellbeing"
We considered a wide variety of names, and after some deliberation (and a survey or two), we landed on "global health and wellbeing" because we think it strikes a good balance between accurate and compelling. I agree with some of the limitations you outlined, and I like your alternative suggestion, especially from the "researcher's" point of view that I'm very focused on. I'll share this with the team, but I expect there would be too much cost to switch at this point.
However, I wonder how much of your and Sjir's views are being driven by path dependence. [...] Given this, I worry you may hesitate to recommend interventions in animal welfare over human welfare even if you found it much more plausible that both areas should be assessed under the same (impartial welfarist) worldview.
It's a bit tricky to respond to this having not (at least yet) done an analysis comparing animal versus human interventions. But if/when we do, I agree it would be important to be aware of the incentives you mentioned, and to avoid making decisions based on path dependencies rather than high-quality research. More generally, a good part of our motivation for this project was to help create better incentives for the effective giving ecosystem. So we'd see coming to difficult decisions on cause prioritisation, if we thought they were justified, as very much within the scope of our work and a way it could add value.
More generally, a good part of our motivation for this project was to help create better incentives for the effective giving ecosystem. So we'd see coming to difficult decisions on cause prioritisation, if we thought they were justified, as very much within the scope of our work and a way it could add value.
Thanks, Michael!
Makes sense!