Executive summary: Optimistic longtermism relies on decisive but potentially unreliable judgment calls, and these may be better explained by evolutionary biases, such as pressures toward pro-natalism, than by truth-tracking reasoning, which exposes the view to an evolutionary debunking argument.
Key points:
Optimistic longtermism depends on high-stakes, subjective judgment calls about whether reducing existential risk improves the long-term future, despite pervasive epistemic uncertainty.
These judgment calls cannot be fully justified by argument and may differ even among rational, informed experts, making their reliability questionable.
The post introduces the idea that such intuitions may stem from evolutionary pressures—particularly pro-natalist ones—rather than from reliable truth-tracking processes.
This constitutes an evolutionary debunking argument: if our intuitions are shaped by fitness-maximizing pressures rather than truth-seeking ones, their epistemic authority is undermined.
The author emphasizes that this critique does not support pessimistic longtermism but may instead justify agnosticism about the long-term value of existential risk reduction.
While the argument is theoretically significant, the author doubts its practical effectiveness and suggests more fruitful strategies may involve presenting new crucial considerations to longtermists.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.