Executive summary: The distribution of moral value follows a power law, meaning that a tiny fraction of possible futures capture the vast majority of value; if humanity’s motivations shape the long-term future, most value could be lost due to misalignment between what matters most and what people value.
Key points:
- Moral value follows a power law—a few outcomes are vastly more valuable than others, meaning that even minor differences in future trajectories could lead to enormous moral divergence.
- Human motivations may fail to capture most value—if the long-term future is shaped by human preferences rather than an ideal moral trajectory, only a tiny fraction of possible value may be realized.
- The problem worsens with greater option space—as technology advances, the variety of possible futures expands, increasing the likelihood that human decisions will diverge from the most valuable outcomes.
- Metaethical challenges complicate the picture—moral realism does not guarantee convergence on high-value futures, and moral antirealism allows for persistent misalignment between human preferences and optimal outcomes.
- Some ethical views weaken the power law effect—theories positing diminishing returns in value or deep incommensurability suggest that the difference between possible futures is not as extreme.
- Trade and cooperation could mitigate value loss—if future actors engage in ideal resource allocation and bargaining, different moral perspectives might preserve large portions of what each values, counteracting the power law effect to some extent.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.