The point about global poverty and longtermism being very different causes is a good one, and the idea of these things being more separate is interesting.
That said, I disagree with the idea that working to prevent existential catastrophe within one’s own lifetime is selfish rather than altruistic. I suppose it’s possible someone could work on x-risk out of purely selfish motivations, but it doesn’t make much sense to me.
From a social perspective, people who work on climate change are considered altruistic even if they are doomy on climate change. People who perform activism on behalf of marginalised groups are considered altruistic even if they’re part of that marginalised group themselves and thus even more clearly acting in their own self-interest.
From a mathematical perspective, consider AI alignment. What are the chances of me making the difference between “world saved” and “world ends” if I go into this field? Let’s call it around one in a million, as a back-of-the-envelope figure (assuming AI risk at 10% this century, that the AI safety field reduces it by 10%, and that I contribute 1⁄10,000th of the field’s total output).
This is still sufficient to save 7,000 lives in expected value, so it seems a worthy bet. By contrast, what if, for some reason, misaligned AI would kill me and only me? Well, now I could devote my entire career to AI alignment and only reduce my chance of death by one micromort—by contrast, my Covid vaccine cost me three micromorts all by itself, and 20 minutes of moderate exercise gives a couple of micromorts back. Thus, working on AI alignment is a really dumb idea if I care only about my own life. I would have to go up to at least 1% (10,000x better odds) to even consider doing this for myself.
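The arithmetic above can be sketched explicitly. All the inputs below are the illustrative assumptions from the comment (10% risk, 10% field-wide reduction, a 1/10,000 personal share, ~7 billion people), not established estimates:

```python
# Back-of-the-envelope check of the figures above.
# All inputs are illustrative assumptions, not established estimates.
ai_risk = 0.10            # assumed chance of AI catastrophe this century
field_reduction = 0.10    # assumed fraction of that risk the field removes
my_share = 1 / 10_000     # assumed share of the field's total output

# Chance that my work is the difference between "saved" and "ends"
p_decisive = ai_risk * field_reduction * my_share

world_population = 7e9
lives_saved_ev = p_decisive * world_population  # expected lives saved
micromorts_avoided = p_decisive * 1e6           # personal risk reduction

print(f"P(decisive)           ~ {p_decisive:.0e}")          # ~1e-06
print(f"Expected lives saved  ~ {lives_saved_ev:,.0f}")     # ~7,000
print(f"Micromorts avoided    ~ {micromorts_avoided:.0f}")  # ~1
```

This makes the asymmetry concrete: the same one-in-a-million probability is worth thousands of expected lives altruistically, but only a single micromort selfishly.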
You make a strong case that trying to convince people to work on existential risks for just their own sakes doesn’t make much sense. But promoting a cause area isn’t just about getting people to work on them but about getting the public and governments and institutions to take them seriously.
For instance, Will MacAskill talks about ideas like scanning the wastewater for new pathogens and using UVC to sterilize airborne pathogens. But he does this only after trying to sell the reader/listener on caring about the potential trillions of future people. I believe this is a very suboptimal approach: most people will support governments and institutions pursuing these, not for the benefits of future people, but because they’re afraid of pathogens and pandemics themselves.
And even when it comes to people who want to work on existential risks, people have a more natural drive to try to save humanity that doesn’t require them to buy the philosophical ideas of longtermism first. That is the drive we should leverage to get more people working on these cause areas. It seems to be working well for the fight against climate change after all.
Right, there is still a collective action/public goods/free rider problem. (But many people reflectively don’t think in these terms, use ‘team reasoning’, and consider cooperating in the prisoner’s dilemma to be self-interested rationality.)
Agree with this, with the caveat that the more selfish framing (“I don’t want to die or my family to die”) seems to be helpfully motivating to some productive AI alignment researchers.
The way I would put it: on reflection, it’s only rational to work on x-risk for altruistic reasons rather than selfish ones. But if more selfish reasoning helps with day-to-day motivation, even if it’s irrational, this seems likely okay (see also Dark Arts of Rationality).