The main point I took from the video was that Abigail is essentially asking: “How can a movement that wants to change the world be so apolitical?” This is also a criticism I have of many EA structures and people. I have even come across people who view EA and themselves as apolitical, even while arguing for longtermism. The video highlights this as well.
Quantifying something does not suddenly make you objective. You cannot quantify everything, so you have to choose what to quantify, and that is a political choice. There is no objective source of truth that tells you that, for example, quality-adjusted life years are the best measure. People choose what makes the most sense to them given their background, but you could easily switch to something else. There is only your subjective choice about what to focus on. I would really appreciate it if this were highlighted more in EA.
Right now the vibe is often “We have objectively compared such and such, and therefore the obvious choice is this intervention or cause.” But this just frames personal preferences about what is important as objective truths about the world. It would be great if this subjectivity were acknowledged more.
And one final point the video also hints at: in EA, basically all modern philosophy outside of consequentialism is ignored, even though much of that philosophy was developed explicitly to criticise pure reason and consequentialism. If you read EA material, you get the impression that the only notable philosophers of the 20th century are Peter Singer and Derek Parfit.
“The main point I took from the video was that Abigail is essentially asking: ‘How can a movement that wants to change the world be so apolitical?’ This is also a criticism I have of many EA structures and people.”
I think it’s surprising that EA is so apolitical, but I’m not convinced it’s wrong to make some effort to avoid issues that are politically hot. Three reasons to avoid such things: 1) they’re often not the areas where the most impact can be had, even ignoring constraints imposed by them being hot political topics; 2) being hot political topics makes it even harder to make significant progress on these issues; and 3) if EAs routinely took strong stands on such things, I’m confident it would lead to significant fragmentation of the community.
EAs do take some political stances, although often not on standard hot topics: they’re strongly in favour of animal rights and animal welfare, and were involved in lobbying for a very substantial piece of legislation recently introduced in Europe. Also, a reasonable number of EAs are becoming substantially more “political” on the question of how quickly the frontier of AI capabilities should be advanced.
It seems to me that we are working with different definitions of what “political” means. I agree that in some situations it can make sense not to weigh in on political discussions, so as not to get pushed to one side. I also see that there are some political issues, like animal welfare, where EA has taken a stance. However, when I say political, I mean: what are the reasons for us doing things, and how do we convince other people of them? In EA there are often arguments that something is not political because there has been an “objective” calculation of value. However, there is almost never a justification of why something was deemed important in the first place, even though, when you want to change the world, this is the important part. Or, on a more practical level: why are QALYs seen as the best way to measure outcomes in many cases? Using this and not another measure is a choice that has to be justified.
Alternatives to QALYs (such as WELLBYs) have been put forward from within the EA movement. But if we’re trying to help others, it seems plausible that we should do it in ways that they care about. Most people care about their quality of life or well-being, as well as the amount of time they’ll have to experience or realise that well-being.
I’m sure there are people who would say they are most effectively helping others by “saving their souls” or promoting their “natural rights”. They’re free to act as they wish. But the reason that EAs (and not just EAs, because QALYs are widely used in health economics and resource allocation) have settled on quality of life and length of life is frankly because they’re the most plausible (or least implausible) ways of measuring the extent to which we’ve helped others.
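To make the QALY arithmetic concrete, here is a minimal sketch in Python. The utility weights, time horizon and cost are invented for illustration; they are not real health-economics estimates.

```python
# Minimal sketch of QALY arithmetic. All numbers below are invented
# for illustration; they are not real health-economics estimates.

def qalys(years: float, quality_weight: float) -> float:
    """QALYs = life-years x quality weight (1.0 = full health, 0.0 = death)."""
    return years * quality_weight

# Hypothetical intervention: moves someone from a health state weighted
# 0.6 to one weighted 0.9 for their remaining 20 years of life.
baseline = qalys(20, 0.6)    # 12.0 QALYs without the intervention
treated = qalys(20, 0.9)     # 18.0 QALYs with it
gained = treated - baseline  # 6.0 QALYs gained

cost = 3000.0  # hypothetical cost of the intervention, in dollars
print(f"{gained:.1f} QALYs gained at ${cost / gained:.0f} per QALY")
```

The point of the sketch is only that once quality and length of life are chosen as the measure, comparing interventions becomes simple arithmetic; the contested step is the choice of measure itself, which is what this thread is about.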
I’d like to add a thought on the last point:

EA appears to largely ignore the developments of modern and post-modern philosophy, making EA appear like a genuinely new idea/movement, which it is not. That means there is a lot to learn from past instances of EA-like movements, “EA-like” meaning Western rich people trying to do good with Rationality. 20th-century philosophy is brimming with very valid critiques of Rationality, but somehow EA seems to jump from Bentham/Mill to Singer/Parfit without batting an eye.
Abigail leaves open how we should do good, whether we want to pursue systemic change or work within the system, or even how we should define what “good” is. I am sure this is intentionally placed at the end of the video. She warns people who consider joining EA to do so with open eyes. I deeply agree with this. If you are thinking about making EA your political movement of choice, be very careful, as with any political movement. EA claims to be open to different moral standpoints, but it most certainly is not. There are unchecked power dynamics at play, demographic bias, “thought leaders”, and the primacy of Rationality. If I had any advice for anyone in EA, I would recommend they go and spend a year or more learning about all the philosophy that came AFTER utilitarianism*. Otherwise, EA will be lacking context and could even appear to be The Truth. You will be tempted to buy into the opinions of a small number of apparently smart people saying apparently smart things, and thereby hand over your moral decisions to them.
* (for a start, Philosophize This is a nice podcast that deals at length with a lot of these topics)
EA is a movement that aims to use reason and evidence to do the most good, so the centrality of “rationality” (broadly speaking) shouldn’t be too surprising. Many EAs are also deeply familiar with alternatives to utilitarianism. While most (according to the surveys) are utilitarians, some are non-utilitarian consequentialists or pluralists.
I suspect that the movement is dominated by utilitarians and utilitarian-leaning people because, while not all effective altruists need be utilitarians, all utilitarians should be effective altruists. In contrast, it’s hard to see why a pure deontologist or virtue ethicist should, as a matter of philosophical consistency, be an effective altruist. It’s also difficult to see how a pure deontologist or virtue ethicist could engage in cause prioritisation decisions without ultimately appealing to consequences.
I want to clarify that I specifically mean philosophical movements like existentialism, structuralism, post-structuralism, and the ethics behind communism and fascism, all of which were influential in the 20th century. I would also argue that the grouping into consequentialism/virtue ethics/deontology does not capture the perspectives brought up in these movements. I would love to see EAs engage with more modern ideas about ethics, because they specifically shed light on the flexibility and impermanence of the terms ‘reason’ and ‘evidence’ over the decades.
Sure, you have to choose some model at some point in order to act, or else you’ll be paralysed. But I really wish that people who make significant life changes based on reason and evidence would take a close look at how those terms are defined within their political movement, and by whom.
I don’t quite see how existentialism, structuralism, post-structuralism and fascism are going to help us be more effectively altruistic, or how they’re going to help us prioritise causes. Communism is a different case, as in some forms it’s a potential altruistic cause area that people may choose to prioritise.
I also don’t think that these ideas are more “modern” than utilitarianism, or that their supposed novelty is a point in their favour. Fascism, just to take one of these movements, has been thoroughly discredited and is pretty much the antithesis of altruism. These movements are movements in their own right, and I don’t think they’d want EAs to turn them into something they’re not. The same is true in the opposite direction.
By all means, make an argument in favour of these movements or their relevance to EA. But claiming that EAs haven’t considered these movements (I have, and I think they’re mistaken) isn’t likely to change much.
Surely they are more modern than utilitarianism: utilitarianism was developed in the 19th century, while all the others mentioned are from the 20th. And it is not their “novelty” that is interesting, but that they are a direct follow-up to, and criticism of, things like utilitarianism. Also, I don’t think the post above was an endorsement of fascism, but rather a call to understand why people turned to fascism in the first place.
The main contribution of the above-mentioned fields of ideas to EA is that they highlight that reason is not as strong a tool as many EAs think it is. You can easily bring yourself into a bad situation even if you follow reason all the way. Reason is not something objective; it is born from your standpoint in the world and the culture you grew up in.
And if EA (or you) have considered things like existentialism, structuralism, and post-structuralism, I’d love to see the arguments for why they are not important to EA. I’ve never seen anything in this regard.
I think reason is as close to an objective tool as we’re likely to get and often isn’t born from our standpoint in the world or the culture we grow up in. That’s why people from many different cultures have often reached similar conclusions, and why almost everyone (regardless of their background) can recognise logical and mathematical truths. It’s also why most people agree that the sun will rise the next morning and that attempting to leave your house from your upper floor window is a bad idea.
I think the onus is on advocates of these movements to explain their relevance to “doing the most good”. As for the various 20th-century criticisms of utilitarianism, my sense is that they’ve been parried rather successfully by other philosophers. Finally, my point about utilitarianism being just as modern is that it hasn’t in any way been superseded by these other movements — it’s still practised and used today.
I think it’s fairly unsurprising that EA is mostly made up of consequentialists and utilitarians. But it often goes way beyond that, into very specific niches that are not at all a requirement for trying to “do good effectively”.
For example, a disproportionate number of people here are capital-R “Rationalists”, referring to the subculture built around fans of the “Sequences” blog posts written by Yudkowsky on LessWrong. I think this subgroup in particular suffers from “not invented here” syndrome, where philosophical ideas that haven’t been translated into rationalist jargon are not engaged with seriously.

I think the note on Not Invented Here syndrome is actually amazing, and I’m very happy you introduced that concept into this discussion.
“There is no objective source of truth that tells you that, for example, quality-adjusted life years are the best measure.”
There’s no objective source of truth telling humans to value what we value; on some level it’s just a brute fact that we have certain values. But given a set of values, some metrics will do better or worse at describing those values.
Or in other words: facts about how much people prefer one thing relative to another are “subjective” in the weak sense that all psychological facts are subjective: they’re about subjects/minds. But psychological facts aren’t “subjective” in a sense like “there are no facts of the matter about minds”. Minds are just as real a part of the world as chairs, electrons, and zebras.
Consider, for example, a measure that says “a sunburn is 1/2 as bad as a migraine” versus one that says “a sunburn is a billion times as bad as a migraine”. We can decompose each into a factual claim about the relative preferences of some group of agents, plus a normative claim that calls the things that group dislikes “bad”.
For practical purposes, the important contribution of welfare metrics isn’t “telling us that the things we dislike are bad”; realists are already happy to run with this, and anti-realists are happy to play along with the basic behavioral take-aways in practice.
Instead, the important contribution is the factual claim about what a group prefers, which is as objective/subjective as any other psych claim. Viewed through that lens, even if neither of the claims above is perfectly accurate, it seems clear that the “1/2 as bad” claim is a lot closer to the psychological truth.
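As a toy illustration of that last point, here is a minimal Python sketch of how one could check which candidate “badness ratio” better fits a group’s elicited preferences. The survey numbers are invented for illustration.

```python
import math

# Toy illustration: score candidate "badness ratios" against (invented)
# elicited trade-offs. Suppose we asked five people how many sunburns
# they would accept to avoid one migraine; these answers are made up.
elicited_ratios = [1.8, 2.0, 2.5, 2.2, 1.9]  # migraine ~2x as bad as a sunburn

def log_error(candidate_ratio: float, data: list[float]) -> float:
    """Mean absolute error in log space (ratios are scale-free, so logs are natural)."""
    return sum(abs(math.log(candidate_ratio) - math.log(r)) for r in data) / len(data)

# Candidate 1: "a sunburn is 1/2 as bad as a migraine" -> migraine/sunburn = 2
# Candidate 2: "a sunburn is a billion times as bad"   -> migraine/sunburn = 1e-9
print(f"error of the '1/2 as bad' metric:           {log_error(2.0, elicited_ratios):.2f}")
print(f"error of the 'billion times as bad' metric: {log_error(1e-9, elicited_ratios):.2f}")
# The first candidate fits the psychological facts far better; which metric
# to use is an empirical question once the normative step is granted.
```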
I agree that the choices we make are in some sense political. But they’re not political in the sense that they involve party or partisan politics. Perhaps it would be good for EAs to get involved in that kind of politics (and we sometimes do, usually in an individual capacity), but I personally don’t think it would be fruitful at an institutional level and it’s a position that has to be argued for.
Many EAs would also disagree with your assumption that there aren’t any objective moral truths. And even many EAs who don’t endorse moral realism would agree that we shouldn’t assume that all choices are equally valid, or that the only reason anyone makes decisions is their personal background.
Without wishing to be too self-congratulatory, when you look at the beings that most EAs consider to be potential moral patients (nonhuman animals including shrimp and insects, potential future people, digital beings), it’s hard to argue that EAs haven’t made more of an effort than most to escape their personal biases.
“I agree that the choices we make are in some sense political. But they’re not political in the sense that they involve party or partisan politics.”
I disagree. Counter-examples: Sam Bankman-Fried was one of the largest donors to Joe Biden’s presidential campaign. Voting and electoral reform have often been topics on the EA Forum and have appeared on the 80,000 Hours podcast. I know several EAs who are or have been actively involved in party politics in Germany. The All-Party Parliamentary Group in the UK says on its website that it “aims to create space for cross-party dialogue”. I would put these people and organizations squarely in the EA space, and the choices they made directly involve political parties*.
* or their abolition, in the case of some proposed electoral reforms, I believe.
My comment mainly referred to the causes we’ve generally decided to prioritise. When we engage in cause prioritisation decisions, we don’t ask ourselves whether they’re a “leftist” or “rightist” cause area.
I did say that EAs may engage in party politics in an individual or group capacity. But they’re still often doing so in order to advocate for causes that EAs care about, and which people from various standard political ideologies can get on board with. Bankman-Fried also donated to Republican candidates who he thought were good on EA issues, for example. And the name of the “all-party” parliamentary group clearly distinguishes it from just advocating for a standard political ideology or party.