So the argument seems wrong right off the bat, but let’s go through the rest.
What moral duties do we have towards people in absolute poverty, animals in factory farms, or future generations?
What would a virtuous person do to help those in absolute poverty, animals in factory farms, or future generations?
What oppressive social systems are responsible for the most suffering in the world, and what can be done to dismantle them?
How should our social and political institutions be structured so as to properly represent the interests of all persons, or all sentient creatures?
None of these are plausible as questions that would define EA. The question that defines EA will be something like “how can we maximize welfare with our spare time and money?” Questions 3 and 4 are just pieces of answering this question, just like “when will AGI be developed?” and other narrower technical questions. Questions 1 and 2 are misstatements: “moral duties” is too broad for EA, and “virtue” is just off the mark. The correct answer to Q1 and Q2, leaving aside the issues with your assumption of three specific cause areas, is “go to EA, ask the question that they ask, and accept the answer.”
While some readers may phrase these questions somewhat differently, my point is simply to emphasise that the question you ask depends upon your ideology.
It’s certainly true that, for instance, negative utilitarians will ask about reducing suffering, and regular utilitarians will ask about maximizing overall welfare, so yes people with different moral theories will ask different questions.
And empirical claims are not what is at issue here. Rather, moral claims (utilitarianism, etc.) beget different questions (suffering, etc.).
You can certainly point out that ideologies also beget different questions. But arguing that EAs have an ideology because they produce certain questions is affirming the consequent.
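To make the fallacy explicit, let $P$ stand for “EA is an ideology” and $Q$ for “EA asks distinctive questions.” The inference being criticized has the form:

```latex
% Affirming the consequent: the premises below do not entail P.
% P: "EA is an ideology"    Q: "EA asks distinctive questions"
\[
\begin{array}{l}
P \rightarrow Q \\
Q \\
\hline
\therefore\ P \quad \text{(invalid)}
\end{array}
\]
```

Many non-ideological systems of inquiry also satisfy $Q$, so observing $Q$ alone cannot establish $P$.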
effective altruists tend to have a very particular approach to answering these questions.
Climate scientists do not go to sociology or ethnography or phenomenology (and so on) to figure out why the world is warming. That’s because these disciplines aren’t as helpful for answering their questions. So here we see how your definition of ideology is off the mark: having a particular approach can be a normal part of a system of inquiry.
Note that I am not taking a position on whether it is correct to privilege the types of evidence or methodologies that EA typically does. Rather, my point is simply that effective altruists seem to have very strong norms about what sorts of analysis is worthwhile doing,
Very strong norms? I have never seen anything indicating that we have very strong norms. You have not given any evidence of that—you’ve merely observed that we tend to do things in certain ways.
despite the fact that relatively little time is spent in the community discussing these issues.
That’s because we’re open to people doing whatever they think can work. If you want to use ethnography, go ahead.
I therefore contend that these methodological choices are primarily the result of ideological preconceptions about how to go about answering questions
First, just because EAs have ideological preconceptions doesn’t mean that EA as a movement and concept is ideological in character. Every group contains many people with ideological preconceptions. If this is how you define ideology, then again you have a definition which is both vacuous and inconsistent with common usage.
Second, there are a variety of reasons to adopt a certain methodological approach other than an ideological preconception. It could be the case that I simply don’t have the qualifications or cognitive skills to take advantage of a certain sort of research. It could be the case that I’ve learned about the superiority of a particular approach somewhere outside EA and am merely porting knowledge from there. Or it could be the case that I think that selecting a methodology is actually a pretty easy task that doesn’t require “extensive analysis of pros and cons.”
My point is rather that these reasons are not generally discussed by EAs.
For one thing, again, we could know things from valid inquiry which happened outside of EA. You keep hiding behind a shield of “sure, I know all you EAs have lots of reasons to explain why these are inferior causes” but that neglects the fact that many of these reasons are circulated outside of EA as well. We don’t need to reinvent the wheel. That’s not to say that there’s no point discussing them; of course there is. But the point is that you don’t have grounds to say that this is anything ‘ideological.’
And we do discuss these topics. Your claim that “They have not even been on the radar” is false. I have dealt with the capitalism debate in an extensive post here; it turns out that not only does economic revolution fail to be a top cause, but it is likely to do more harm than good. Many other EAs have discussed capitalism and socialism, just not with the same depth as I did. Amanda Askell has discussed the issue of Pascal’s Wager, with a far more open-minded attitude than any of the atheist-skeptic crowd. I have personally thought carefully about the issue of religion as well; my conclusion is that theological ideas are too dubious and conflicting to provide good guidance, and mostly reduce to a general imperative to make the world more stable and healthy (for one thing, to empower our future descendants to better answer questions about the philosophy of religion and theology). Then there is David Denkenberger’s work on backup food sources, primarily for surviving nuclear winter but also applicable to other catastrophes. And there is our work on climate change, which indirectly contributes to keeping the biosphere robust. Instead of merely searching the EA Forum, you could have asked people for examples.
Finally, I find it a little obnoxious to expect people to preemptively address every possible idea. If someone thinks that a cause is valuable, let them give some reasons why, and we’ll discuss it. If you think that an idea’s been neglected, go ahead and post about it. Do your part to move EA forward.
Yet the GPI research agenda focuses almost entirely on technical issues in philosophy and economics pertaining to the long-termism paradigm
They didn’t assume that ex nihilo. They have reasons for doing so.
The Fermi Paradox is directly relevant to estimating the probability of human extinction and therefore quite relevant for judging our approach to growth and x-risk.
or even substantially reforming global economic institutions
They recommend an economics PhD track and recently discussed the importance of charter cities. More on charter cities: https://innovativegovernance.org/2019/07/01/effective-altruism-blog/
Hence, even though it seems that in principle socialists, Buddhists, and ecological activists (among others) are highly concerned about reducing the suffering of humans and animals, FRI ignores the topics that these groups would tend to focus on,
Huh? Tomasik’s writings have extensively debunked naive ecological assumptions about reducing suffering; this has been a primary target from the beginning. It seems like you’re only looking at FRI and not all of Tomasik’s essays, which form the background.
Anything which just affects humans, whether it’s a matter of socialism or Buddhism or anything else, is not going to be remotely as important as AI and wildlife issues under FRI’s approach. Tomasik has quantified the numbers of animals and estimated sentience; so have I and others.
As in the case of FHI, they also seem to find room for some topics of highly dubious relevance to any of EAs goals, such as this paper about the potential for correlated actions with civilizations located elsewhere in the multiverse
You don’t see how cooperation across universes is relevant for reducing suffering?
I’ll spell it out in basic terms. Agents have preferences. When agents work together, more of their preferences are satisfied. Conscious agents generally suffer less when their preferences are satisfied. And the multiverse could have lots of agents in it. So if cooperation can extend across universes, it could reduce a great deal of suffering.
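As a toy illustration of the middle step (hypothetical numbers, not anyone’s actual model): two agents rank three outcomes. Acting alone, each pushes for its own favourite outcome; cooperating, they jointly pick the outcome with the highest combined preference satisfaction.

```python
# Toy model: cooperation yields higher total preference satisfaction.
# Preference scores each agent assigns to outcomes A, B, C (made-up numbers).
prefs = {
    "agent1": {"A": 3, "B": 2, "C": 0},
    "agent2": {"A": 0, "B": 2, "C": 3},
}

outcomes = ["A", "B", "C"]

def solo_total():
    """Each agent unilaterally pushes for its own favourite outcome;
    assume the result is an even split between the two favourites."""
    o1 = max(outcomes, key=lambda o: prefs["agent1"][o])  # agent1's favourite: "A"
    o2 = max(outcomes, key=lambda o: prefs["agent2"][o])  # agent2's favourite: "C"
    # Each agent gets its favourite outcome half the time.
    return 0.5 * (prefs["agent1"][o1] + prefs["agent2"][o1]) + \
           0.5 * (prefs["agent1"][o2] + prefs["agent2"][o2])

def cooperative_total():
    """Agents jointly pick the outcome maximising combined satisfaction."""
    best = max(outcomes, key=lambda o: prefs["agent1"][o] + prefs["agent2"][o])
    return prefs["agent1"][best] + prefs["agent2"][best]

print(solo_total())         # 3.0 (half the time A: 3+0; half the time C: 0+3)
print(cooperative_total())  # 4   (compromise outcome B: 2+2)
```

The cooperative total exceeds the unilateral total, which is the sense in which working together satisfies more preferences overall; the multiverse-cooperation paper applies the same logic across universes.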
That said, on the basis of my research and experiences in the movement, I would suggest the following core tenets of EA ideology
Except we’re all happy to revise them while remaining EAs, if good arguments and evidence appear. Except maybe some kind of utilitarianism, but as I said, moral theories are not sufficient for an ideology.
In fact, my view is that we can’t really get anywhere in rational investigation without certain starting assumptions, and these starting assumptions constitute our ideology
No, they constitute our priors.
We should critically analyse this ideology, understand its strengths and weaknesses
Except we are constantly dealing with analysis and argument over the strengths and weaknesses of utilitarianism, of atheism, and so on. This is not anything new to us. Just because your search on the EA Forum didn’t turn anything up doesn’t mean we’re not familiar with them.
This is essentially what all other ideologies do – it is how the exchange of ideas works
No, all other ideologies do not critically analyze themselves and understand their strengths and weaknesses. They present their own strengths, and attack others’ weaknesses. It is up to other people—like EAs—to judge the ideologies from the outside.
and not pretend they are aloof from it by resorting to the refrain that ‘EA is a question, not an ideology’
This is just a silly comment. When we say that EA is a question, we’re inviting you to tell us why utilitarianism is wrong, why Bayesianism is wrong, etc. This is exactly what you are advocating for.
Now I took a look at your claims about things we’re “ignoring,” like sociological theory and case studies. First, you ought to follow the two-paper rule: a field needs to show its relevance. Not every line of work is valuable. Just because a paper passed peer review and the author has a PhD doesn’t mean it accomplishes something useful for our particular purposes. We have limited time to read.
Second, some of that research really is irrelevant for our purposes. There are a lot of reasons why. Sometimes the research is just poor quality, or not replicable or generalizable; social theory can be vulnerable to this. Another issue is that here in EA we are worried about one movement and just a few high-priority causes. We don’t need a sweeping theory of social change; we need to know what works right here and now. This means that domain knowledge (e.g. surveys, experiments, data collection) for our local context is more important than obtaining a generic theory of history and society. Finally, if you think phenomenology and existentialism are relevant, I think you just don’t understand utilitarianism. You want to look at theories of well-being, not these other subfields of philosophy. But even when it comes to theories of well-being, the philosophy is mostly irrelevant, because happiness, preferences, and so on match up pretty well for all practical intents and purposes (especially given measurement difficulties—we are forced to rely on proxy measures like GDP and happiness surveys anyway).
Third, your claim that these approaches are generally ignored is incorrect.
For sociological theory—see ACE’s concepts of social change.
Historical case studies—see the work on the history of philanthropy and field growth done by Open Phil, and ACE’s case studies of social movements. I’m also currently working on a project of historical case studies of catastrophic risks: an evaluation of historical societies to see what they knew and might have done to prevent GCRs if they had thought about them at the time.
Regression analysis—just… what? This one is bizarre. No one here is ideologically biased against any particular kind of statistical model; we have cited plenty of papers which use regressions.
Overall, I’m pretty disappointed that the EA community has upvoted this post so much, when its arguments are heavily flawed. I think we are overly enthusiastic about anyone who criticizes us, rather than judging them with the same rigor that we judge anyone else.
Another example contradicting your claims about EA: in Candidate Scoring System I went into extensive detail about methodological pluralism. It covers all of the armchair philosophy-of-science points about how we need to be open-minded about sociological theory and so on. I have a little bit of ethnography in it (my personal observations). It is also where I delved into capitalism and socialism. It’s written very carefully to dodge the all-too-predictable critique that you’re making here, and making it that way took up an extensive amount of time while adding no clear gain in the accuracy of the results. So you can see it’s a little annoying when someone criticizes the EA movement yet again, without knowledge of this recent work.
Yet, at the end of the day, most of the papers I actually cite are standard economics, criminological studies, political science, and so on. Not ethnographies or sociological theories. Know why? Because when I see ethnographies and sociological theory papers, and I read the abstract and/or the conclusion or skim the contents, I don’t see them giving any information that matters. I can spend all day reading about how veganism apparently green-washes Israel, for instance, but that’s not useful for deciding what policy stance to take towards Israel or farming. It’s just commentary. You are making an incorrect assumption that every line of scholarship that vaguely addresses a topic is going to be useful for EAs who want to actually make progress on it. This is not a question of research rigor, it’s simple facts about what these papers are actually aiming at.
You know what it would look like, if I were determined to include all this stuff? “Butler argues that gender is a social construct. However, just because gender is a social construct doesn’t tell us how quality of life will change if the government bans transgender workplace discrimination. Yeates argues that migration transforms domestic caring into global labor chains. However, just because migration creates global care chains doesn’t tell us how quality of life will change if the government increases low-skill immigration.” And on and on and on. Of course it would be a little more nuanced and complex but you get the idea. Are you interested in sifting through pages of that sort of prose? Does it get us closer to understanding how to improve the world?