The main point I took from the video was that Abigail is essentially asking: “How can a movement that wants to change the world be so apolitical?” This is also a criticism I have of many EA structures and people.
I think it’s surprising that EA is so apolitical, but I’m not convinced it’s wrong to make some effort to avoid issues that are politically hot. Three reasons to avoid such things: 1) they’re often not the areas where the most impact can be had, even ignoring the constraints imposed by their being hot political topics; 2) being hot political topics makes it even harder to make significant progress on them; and 3) if EAs routinely took strong stands on such things, I’m confident it would lead to significant fragmentation of the community.
EA does take some political stances, although often not on the standard hot topics: EAs are strongly in favour of animal rights and animal welfare, and were involved in lobbying for a very substantial piece of legislation recently introduced in Europe. A reasonable number of EAs are also becoming substantially more “political” on the question of how quickly the frontier of AI capabilities should be advanced.
It seems to me that we are talking about different definitions of what “political” means. I agree that in some situations it can make sense not to weigh in on political discussions, so as not to get pushed to one side. I also see that there are some political issues, like animal welfare, where EA has taken a stance. However, when I say “political” I mean: what are the reasons for us doing things, and how do we convince other people of them? In EA it is often argued that something is not political because there has been an “objective” calculation of value. However, there is almost never a justification for why something was deemed important in the first place, even though, when you want to change the world, this is the crucial part. Or, on a more practical level: why are QALYs seen as the best way to measure outcomes in many cases? Using this measure and not another is a choice which has to be justified.
Alternatives to QALYs (such as WELLBYs) have been put forward from within the EA movement. But if we’re trying to help others, it seems plausible that we should do it in ways that they care about. Most people care about their quality of life or well-being, as well as the amount of time they’ll have to experience or realise that well-being.
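To make that concrete: a QALY simply multiplies those two things together, weighting each year of life by a quality factor between 0 (dead) and 1 (full health). Here is a minimal sketch in Python, with entirely made-up numbers (the weights and year counts below are illustrative, not real effectiveness estimates):

```python
def qalys(years: float, quality_weight: float) -> float:
    """QALYs gained = years lived, weighted by quality of life (0 to 1)."""
    return years * quality_weight

# Hypothetical comparison of two interventions:
# A extends life by 2 years at 0.9 quality;
# B raises quality from 0.5 to 0.8 for 10 remaining years.
gain_a = qalys(2, 0.9)                    # 1.8 QALYs
gain_b = qalys(10, 0.8) - qalys(10, 0.5)  # 3.0 QALYs

print(f"A: {gain_a} QALYs, B: {gain_b} QALYs")
```

On this measure, B helps more even though it adds no extra years of life, which is exactly why both quality and length enter the calculation.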
I’m sure there are people who would say they are most effectively helping others by “saving their souls” or promoting their “natural rights”. They’re free to act as they wish. But the reason that EAs (and not just EAs, because QALYs are widely used in health economics and resource allocation) have settled on quality of life and length of life is frankly because they’re the most plausible (or least implausible) ways of measuring the extent to which we’ve helped others.