“The key point is that plenty of knowledge and data will be dismissed, never published, and/or never encountered by the people with funding to effect change (such as EA grant-makers) simply because of its producers’ position within the geopolitics of knowledge.” This is terrible. And I believe EA does care about solving it, though maybe not as much as it should.
“As much as any other part of society, power/knowledge shapes academic research.” Science is not the same as social media, but we are in full agreement that it is subject to influence and bias. I would say EA is highly interested in decreasing that power/influence dynamic.
“EA may be inflicting epistemic violence, imperialism, and coloniality through the solutions it funds and research design it undertakes.” Aid is optional and voluntary. Even so, I believe indigenous ways of thinking and living will be unintentionally damaged and lost by charitable efforts. I believe this is acceptable if the good is greater than the harm, though we may be unqualified to determine the extent of the harm. I don’t think simply consulting and asking how indigenous peoples want to be helped would address your critique. The alternative, as I understand it, is to operate via their ways of being (if that is even possible), but it is unclear how to speak their language, absorb their morals, understand their lives, and do all this efficiently so as to provide the most benefit. We already recognize that giving directly and cash transfers are some of the most effective ways to assist.
“When common pool resources are attributed monetary value, they become ontologically fungible.” It’s necessary to compare values and prioritize actions. Without trying to estimate values and compare across them, we resort to the art of acting based on general principles, and we risk worse, imbalanced actions. I am open to alternative methods, but I maintain they need to be as universal and grounded in truth as possible. This is because we are acting for the world, and all its cultures. While I am sympathetic that we often steer wrong—for example by favoring legible metrics over unquantifiable unknowns and often asking the wrong questions—we at least acknowledge these problems and are making efforts to combat known dangers. Again, as far as I know, there are no better alternatives in other cultures. We all struggle with this.
“It is colonial for EA to believe that it can know what will be the most effective solution for people in the Global South, without even consulting them.” This is why EA starts with “saving lives”, as it seems to be universally valuable. And EA does consult with the Global South, as shown by GiveDirectly, cash transfers, and many other EA efforts. (Because it works according to measured outcomes, not because it avoids neocolonialism.)
“Through anticipatory philanthropic pledges, the ultrawealthy gain social capital.” Are you saying the world is worse off every time massive donations are made—that they do more damage than benefit? I assume you are not going that far. If all donations stopped, I think the ultrawealthy would still gain social capital from their wealth in other (worse) ways. Even if I agree capitalistic systems are a horrible trap, I’m not sure that altruistic donations have that much to do with perpetuating capitalism. I don’t think capitalism would be closer to falling apart or be revealed as a scam. I don’t think charitable giving especially subverts judgement of societies/cultures/systems. It’s an observed rate of donations. We can compare it to rates of charitable effort under other systems/cultures/situations/regulations. If it’s better, it’s better; if it’s worse, it’s worse.
Hi EcologyInterventions,

Many thanks for your engagement with my work. I apologise for the delay in replying; I’ve been on holiday since posting, and wanted to wait until I could compose a full response.
Let me offer my thoughts to each of yours.
1. EA missing and ignoring knowledge: I agree, the current geopolitics of knowledge production and consumption represents a terrible state of affairs – and a systemic one, not limited to EA. The core question is whether EA’s epistemic architecture (in terms of what perspectives and evidence-sources it admits) allows EA to see the most urgent problems and effective solutions. I’m arguing that the EA community needs to do some serious soul-searching on this point, and discuss the systemic blind-spots that follow from its epistemic architecture around maximization and efficiency.
2. ‘Overcoming’ power/knowledge: Glad we agree on this! The challenge, nevertheless, is that power/knowledge is not something that can be overcome per se. Whilst positionality can be recognised and mitigated to some extent, and the most egregious cases of bias within scientific research must be called out (see Caroline Criado Perez’s work, for instance), power/knowledge will always persist, simply in a mutated form. Attempting to strive for objectivity is like striving to become a god (Donna Haraway [1988] calls it a ‘god-trick’). Perhaps what I’m asking from EA is a little more humility and self-reflection?
3. & 5. Coloniality: You summarise the issue I’m raising really well with your clause: “we may be unqualified to determine the extent of the harm”. An EA policy intervention in the Global South may be the least harmful available, but an EA researcher in the Global North cannot know this. In response to your point #5, this is what makes EA epistemologically colonial: even if EA does consult with the Global South, because the ranking of interventions takes place in the Global North, it remains suffused with coloniality. Let’s celebrate that EA gives cash directly; I’m not disputing that this is a really important, effective way to assist people and materially address some of the injustices of extractivism. However, what I want to highlight as potentially colonial is that EA only backs this intervention because it performs well in peer-reviewed ‘measured outcomes’. In other words, it’s the difference between giving a community $1000 in solidarity with them and their own struggles, to spend as they see fit, versus giving the community $1000 because several scientific papers tell us that is most effective. They might achieve the same thing, but there is a major epistemic difference between the two.
4. Prioritization: I agree that we need to prioritize actions, to ensure we do not further imbalance the world. Thank you for acknowledging that EA may miss unquantifiable unknowns. However, I am concerned by your statement: “This is because we are acting for the world, and all its cultures”. I find the paternalism and omniscience here disquieting, because it sets up a kind of god complex through which the EA community can believe it has a duty to know on behalf of everyone, and apply its methods universally, forgetting the positionality of the comparatively tiny community that developed its moral code. Why can’t methods of comparison be plural—different methods of comparison for different situations, depending on what is most important to protect or maximize in a given context (e.g., life, happiness, long-term health etc.)?
6. Ultrawealthy philanthropy: I’m saying that, firstly, massive donations are a major way through which extreme wealth inequality is normalized as a benevolent thing, and secondly, that we need to consider the net good of any EA intervention, adjusted on the basis of the negative externalities contained within that money. (Net good may remain positive, of course—though not as effective, perhaps, as never having had that inequality in the first place. Some data here would be useful, I agree.) I may have laboured my point about social capital deflecting attention, but I stand behind the argument that philanthropy is an accumulation strategy and a justification against policy interventions to end extreme wealth inequality. I’d argue that the most effective thing here would be a wealth cap, to more evenly distribute wealth across society.
Thanks again for your comments; I hope my remarks clarify some areas of misunderstanding.
Thank you for your thoughtful response!

1) I’m concerned with our lack of awareness, and with the obstacles to gaining awareness (our epistemic architecture). I am concerned with the deafening silence in science from many regions of the world. I am okay with EA restricting its views to those most likely to be universal, but this requires being humble and self-aware.
4) “EA only backs this intervention because it performs well in peer-reviewed ‘measured outcomes’. In other words, it’s the difference between giving a community $1000 in solidarity with them and their own struggles, to spend as they see fit, versus giving the community $1000 because several scientific papers tell us that is most effective.”
I am for reduced certainty in the face of so much unaccounted for, and far more respect for autonomy.
When it comes to relying on measured outcomes, I’m not sure what choice we have. I often hear that measured outcomes are illegitimate; “incomplete” I can agree with. Values like equality, representation, evidence, fairness, and prosperity may be arbitrary and colonial, but they are tailored for contexts of populous intercultural conflicts concerning material things.* I’m honestly doubtful that other value systems are better in this context (but I’m looking for recommendations!). If we do not use measured outcomes, then what do we do instead?
*EA fails at fulfilling spiritual needs. I think this is because spiritual fulfillment does not transfer between contexts, but I am interested in finding more effective ways to improve spiritual fulfillment. It is highly likely it does not look like EA/colonial systems.
Yes, I am using my value system to legitimize my value system, but the obstacle remains even when following the resounding calls to listen more, transfer sovereignty, lift up, etc. We are still using those value systems to legitimize themselves. Nor is it as simple as unconditionally accepting all value systems simultaneously. Assuming total ignorance is obviously worse: giving equally to powerful and powerless! Instead we must somehow average value systems together. I believe the colonialist science approach was built for the sake of attempting to do that neutrally. Now there might be a much better way to do it (I think you are suggesting this!), and it would be insanely valuable to have a better method or set of methods. As of now, I’m not sure what it is. I can only heartily agree on the meta-level that we keep searching for something better. Averaging between value systems, trying to jump outside my own value system, looks like measured outcomes as far as I can tell, despite the colonial roots.
Recognizing our sizable ignorance is obviously correct; our best guess is almost certainly wrong! I think having more “methods of comparison for different situations, depending on what is most important to protect or maximize in a given context (e.g., life, happiness, long-term health etc.)” is enormously good, and I would say it is already a core part of most EA efforts! I think we attempt several as a sort of “insurance” against being wrong.
We ought to promote autonomy and sovereignty way more. I am realizing this more and more throughout this discussion.
*As a metaphor: if someone is putting their life at risk when rock climbing, there are times it is right to intervene and there are times it is right to respect their autonomy. There are many points to consider in such complex decisions: your relationship, how well you know them, their age, their history of decisions, their joy from rock climbing, etc. I think this is the same with cultures. Sometimes it is right to briefly supersede their autonomy, but only in the most clearly egregious circumstances. Autonomy is so highly valuable as to supersede acting “for their sake” according to our own values of reducing self-harm. This is hard to see. Really hard to see. And I thank you for bringing it up and pointing it out.
4) “‘This is because we are acting for the world, and all its cultures.’ I find the paternalism and omniscience here disquieting, because it sets up a kind of god complex through which the EA community can believe it has a duty to know on behalf of everyone, and apply its methods universally, forgetting the positionality of the comparatively tiny community that developed its moral code.”
I do not mean that EA knows best; quite the opposite! EA is only sure that it does not know, so it is trying to take the least assumptive actions, those most likely to be shared and most likely to be true for most people now and in the future. We A) ought not to shirk the power we have to do good, and B) must attempt to work for the sake of all, not just a few. I am not at all comfortable that EA is doing it right.
To summarize:
The unknown unknowns, the known unknowns, and the difficult-to-measure values are extremely important and neglected. We should do more to address that, even though it is hard.
Assuming ignorance and trying to act on values which are most likely to be shared and future-proof is highly important.
We must remain humble, critical of our methods, and incorporate ever more viewpoints so we may work as universally as possible.
There is not one method that works in all circumstances, but many methods for many different contexts.
Autonomy has great value that supersedes most other values.
This must be taken extremely seriously, even against generally “safe” values like saving lives and reducing disease.