Hi there :) I very much sense that a conversation with me last weekend at EAGxVirtual is causally connected to this post, so I thought I’d share some quick thoughts!
First, I apologize if our conversation led you to feel more uncertain about your career in a way that negatively affected your well-being. I know how subjectively "annoying" it can be to have your priorities called into question.
Second, I think your post raises three distinct potential problems with reducing x-risks (all three of which I know we've talked about) that are worth disentangling:
1. You mention suffering-focused ethics and reasons to believe these views advise against x-risk reduction.
2. You also mention the problem of cluelessness, which I think is worth treating separately. Motivations for cluelessness vis-a-vis the sign of x-risk reduction seem to me largely orthogonal to suffering-focused ethics. I don't think someone who rejects suffering-focused ethics should be any less clueless. In fact, one can argue they should be more agnostic about the sign, while those endorsing suffering-focused ethics might have good reasons to at least weakly believe x-risk reduction hurts their values, for the "more beings → more suffering" reason you mention. (I'm quite uncertain about this, however, and sympathetic to the idea that those endorsing suffering-focused ethics should perhaps be just as clueless.)
3. Finally, objections to the 'time of perils' hypothesis can also be reasons to doubt the value of x-risk reduction (Thorstad 2023), but for very different reasons. That is purely a question of which is most "impactable": x-risks (and maybe other long-term causes) or shorter-term causes, rather than a question of whether x-risk reduction does more good than harm to begin with (as with 1 and 2).
Discussions of the questions raised by these three points seem healthy indeed.