Executive summary: This exploratory and reflective post grapples with the tension between two clusters of values: dynamism and pragmatic power on one hand, and humility and virtuous restraint on the other. While both are important, recent experiences and thinking have pushed the author to emphasize virtues like humility, cooperation, and pluralism, especially in navigating transformative technologies like AI, where locking in current preferences risks undermining long-term flourishing.
Key points:
Two value clusters are in tension: One emphasizes decisiveness, tradeoffs, and real-world impact (“rolling up sleeves”), while the other emphasizes humility, epistemic rigor, and wariness of power’s corrupting effects. The author has shifted more toward the latter, especially in the context of AI.
Power-seeking, even with good intentions, often warps judgment: Observations of altruistically motivated actors failing to use power wisely have increased the author’s skepticism of centralizing influence.
Virtue ethics as a delegation strategy: Virtue can be seen as a way of shaping future selves or agents, and focusing on internal character might prevent failures that arise from short-term, pragmatic consequentialism.
Dynamism versus stasis in AI governance: Drawing on thinkers like Helen Toner and Joe Carlsmith, the post warns that preventing catastrophic AI risks via top-down control could stifle experimentation, freedom, and the possibility of decentralized progress.
The importance of preserving “kernels” for future governance: Rather than locking in decisions now, we should aim to pass on values, tools, and structures that future, wiser generations can use to navigate challenges more effectively.
Wisdom longtermism over welfare longtermism: The author favors a focus on building toward a wiser, more empowered civilization—one that can better solve deep future challenges—rather than optimizing directly for current conceptions of welfare.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.