1) I’m concerned with our lack of awareness, and obstacles to gaining awareness (our epistemic architecture). I am concerned with the deafening silence in science from many regions of the world. I am okay with EA restricting its views to those most likely to be universal, but this takes being humble and self-aware.
4) EA only backs this intervention because it performs well in peer-reviewed ‘measured outcomes’. In other words, it’s the difference between giving a community $1000 in solidarity with them and their own struggles, to spend as they see fit, and giving the community $1000 because several scientific papers tell us that is most effective.
I am for reduced certainty in the face of so much unaccounted for, and far more respect for autonomy.
When it comes to relying on measured outcomes, I’m not sure what choice we have. I often hear that measured outcomes are illegitimate. “Incomplete” I can agree with. Values like equality, representation, evidence, fairness, and prosperity may be arbitrary and colonial, but they are tailored for contexts of populous intercultural conflicts concerning material things.* I’m honestly doubtful that other value systems are better in this context. (but looking for recommendations!) If we do not use measured outcomes, then what do we do instead?
*EA fails at fulfilling spiritual needs. I think this is because spiritual fulfillment does not transfer between contexts, but I am interested in finding more effective ways to improve spiritual fulfillment. It is highly likely it does not look like EA/colonial systems.
Yes, I am using my value system to legitimize my value system, but the obstacle remains even when following the resounding calls to listen more, transfer sovereignty, lift up, etc. We are still using those value systems to legitimize themselves. Nor is it as simple as unconditionally accepting all value systems simultaneously. Assuming total ignorance is obviously worse: giving equally to powerful and powerless! Instead we must somehow average value systems together. I believe the colonialist science approach was built for the sake of attempting to do that neutrally. Now there might be a much better way to do it (I think you are suggesting this!), and it would be insanely valuable to have a better method/set of methods. As of now, I’m not sure what it is. I can only heartily agree on the meta-level that we keep searching for something better. Averaging between value systems, trying to jump outside my own value system, looks like measured outcomes as far as I can tell, despite the colonial roots. (-?)
Recognizing our sizable ignorance is obviously correct; our best guess is almost certainly wrong! I think more “methods of comparison for different situations, depending on what is most important to protect or maximize in a given context (e.g., life, happiness, long-term health etc.)” is enormously good and I would say already a core part of most EA efforts! I think we attempt several in a sort of “insurance” against being wrong.
We ought to promote autonomy and sovereignty way more. I am realizing this more and more throughout this discussion.
*As a metaphor: if someone is putting their life at risk when rock climbing, there are times it is right to intervene and there are times it is right to respect their autonomy. There are many points to consider in such complex decisions: your relationship, how well you know them, their age, their history of decisions, their joy from rock climbing, etc. I think this is the same with cultures. Sometimes it is right to briefly supersede their autonomy, but only in the most clearly egregious circumstances. Autonomy is so highly valuable as to supersede acting “for their sake” according to our own values of reducing self-harm. This is hard to see. Really hard to see. And I thank you for bringing it up and pointing it out.
4) “This is because we are acting for the world, and all its cultures”. I find the paternalism and omniscience here disquieting, because it sets up a kind of god complex through which the EA community can believe it has a duty to know on behalf of everyone, and apply its methods universally, forgetting the positionality of the comparatively tiny community that developed its moral code.
I do not mean that EA knows best, quite the opposite! EA is only sure that it does not know, so it is trying to take the least assumptive actions: those most likely to be shared, and most likely to be true for most people now and in the future. We A) ought not to shirk the power we have to do good, and B) must attempt to work for the sake of all, not just a few. I am not at all comfortable that EA is doing it right.
To summarize:
The unknown unknowns, the known unknowns, and the difficult-to-measure values are extremely important and neglected. We should do more to address that, even though it is hard.
Assuming ignorance and trying to act on values which are most likely to be shared and future-proof is highly important.
We must remain humble, critical of our methods, and incorporate ever more viewpoints so we may work as universally as possible.
There is not one method that works in all circumstances, but many methods for many different contexts.
Autonomy has great value that supersedes most other values.
This must be taken extremely seriously, even against generally “safe” values like saving lives and reducing disease.
Thank you for your thoughtful response!