There is a misunderstanding here: “increasing the value of futures where we survive” is itself an X-risk reduction intervention.
See the comment by MacAskill https://forum.effectivealtruism.org/posts/TeBBvwQH7KFwLT7w5/william_macaskill-s-shortform?commentId=jbyvG8sHfeZzMqusJ which clarifies that the debate is between Extinction-Risks and Alignment-Risks (i.e., increasing the value of the future), both of which are X-risks. The debate is not between X-risks and Alignment-Risks.
One of the most impactful ways to increase the value of futures where we survive is to work on AI governance and technical AI alignment.