To clarify, are you asking for a theory of victory around advocating for longtermism (i.e., what is the path to impact for shifting minds around longtermism), or for causes that are currently considered good from a longtermist perspective?
Three things:
1. I’m mostly asking for any theories of victory pertaining to causes that support a longtermist vision / end-goal, such as eliminating AI risk.
2. But I’m also interested in a theory of victory / impact for longtermism itself, in which multiple causes interact. For example, if
the longtermist goal = reduce all x-risk and develop technology to end suffering, enable flourishing + colonise the stars
then the components of a theory of victory / impact could be:
reduce x-risk pertaining to AI, bio, and others
research / understanding around enabling flourishing / reducing suffering
stimulate innovation
think through governance systems to ensure the technologies / research above are used for good, not evil
3. Definitely not ‘advocating for longtermism’ as an end in itself, but I can imagine that advocacy could be part of a wider theory of victory. For example, one could postulate that reducing x-risk would require mobilising considerable private / public sector resources, which in turn requires winning hearts and minds around both how scarily probable x-risk is and the bigger goal of giving our descendants beautiful futures / leaving a legacy.