This post helped me clarify which causes ought to be prioritized from a longtermist standpoint. Although we don't know the long-term consequences of our actions (and hence are clueless), we can take steps to reduce our uncertainties and reliably do good over the long term. These include:
Global priorities research: to improve our understanding of what's morally important and of the risks and opportunities facing humanity.
Improving institutional decision-making at the global level: to improve humanity's "ability to mediate diverse preferences, decide on collectively held goals, and work together towards those goals."
Improving foresight: "The further we can see [into the future], the more information we can incorporate into our decision-making, which in turn leads to higher quality outcomes with fewer surprises."
Reducing existential risks: "Avoiding extinction and 'lock-in' of suboptimal states is necessary for realizing the full potential benefit of the future."
"Increasing the number of well-intentioned, highly capable people," including through building effective altruism and training people to think more clearly.
Although we don't necessarily know where humanity will end up in the very long term, these interventions help us increase our steering capacity: humanity's ability to navigate risks and opportunities along the way.
I recommend this post to anyone interested in longtermism, as it's one of the few systematic attempts at longtermist cause prioritization that I've seen. There are things I'd add: perhaps economic growth would augment humanity's steering capacity by increasing the resources available to us to avoid risks and pursue opportunities (see also "Existential Risk and Growth"). And perhaps promoting effective altruism to a culturally and intellectually diverse audience would help us make more robust decisions through exposure to more ideas about what matters and how to do good.
I'm heartened to have seen progress in the areas identified in this post. For example, the Effective Institutions Project was created in 2020 to work systematically on improving institutional decision-making (IIDM). Also, I've seen posts calling attention to the inadequacy of existing cause prioritization research.
Going forward, I'd like to see more systematic attempts at cause prioritization from a longtermist perspective, perhaps building on this post. 80,000 Hours' list of problem profiles currently includes 17 problems that they claim might be as pressing as their current priority problems (artificial intelligence, biosecurity, etc.). I'd like to see more research clarifying and evaluating these problems, and drawing quantitative comparisons between them.