I found this very interesting to read, not because I agree with everything that was said (some bits I do, some bits I don’t) but because I think someone should be saying it.
I have had similar thoughts about the apparent divide between 'longtermism' and 'shorttermism': to me there seems to be a large overlap between the two, which aligns with what you mentioned. X-risk can occur within this lifetime, so you do not need to be convinced of longtermism to care about it. Even if future lives had no value (I'm not making that argument) and you only count current lives, preventing x-risk still has massive value, because that is nearly 8 billion people we are talking about!
So I like the point that caring about x-risk does not need to be altruistic. But it very much can be, so separating it from effective altruism is unnecessary:
1. 'I want to save as many people as possible, so stopping extinction is good.' This is altruism, and it is worth doing as effectively as possible.
2. 'I want to save myself from extinction.' This is not altruism, but it can lead to the same end, so there can be high benefit in promoting x-risk work this way to more general audiences.
So I do not think 'effective altruism' is the wrong name; everyone I've come across in EA so far has been in the first category. EA is also broader, including animal welfare, global health and poverty, and various other areas that improve the lives of sentient beings (and I think this should be talked about more). 'Effective altruism' is a good name for all of those things.
But if the goal is to reduce x-risk in any way possible, it is good to work with people in the second category, those who want to save themselves and their loved ones. If we want large shifts in global policy and in how people generally act, the message needs to reach a general audience, and people should be encouraged to work on high-impact things even if they are 'not aligned'.