8 Insights From Our Longtermism Discussion Group

EA Colorado recently hosted our first Longtermism Discussion Group. It ran for 4 weeks, and we used this curriculum, which was directly based on this curriculum with the addition of some guiding questions that helped focus our attention during the readings. (We actually split into a Monday and a Wednesday group based on numbers, and these insights only reflect the Wednesday group, which I organized and was a part of.)
Here are some of the insights I am taking away from it. I spent ~2 hours writing this and got input from 2 other participants before publishing it.
The Importance of Longtermism is Clear: We care. A lot. The arguments for longtermism make sense—there’s enormous potential for unimaginable creativity, beauty, and everything else that comes from living life on Earth.
The Future is Wildly Uncertain (& Becoming More So): The future, even 50-100 years out, holds a TON of uncertainty. With the speed of change accelerating and more and more people becoming more and more ‘agentful’, there’s an increasing number and scale of unknowns impacting the future. We noticed a feeling of overwhelm when we considered all of that.
We Hold a Preference for Certainty: Our group (and likely humans in general) has a natural bias toward working on and thinking about things with more certainty. We noticed this in a few different ways…
Our group wanted clarity on which x-risks were most likely to impact humanity and how, and also which ones could be controlled, reduced, or prevented entirely, such that we could feel confident allocating our collective attention and/or more resources toward those risks.
We felt more comfortable exploring x-risks that we as individuals had some existing knowledge of—this feels natural and worth noting.
We felt most compelled to imagine ways we might participate in longtermism using our existing skills (things we can do with higher ‘certainty’ or confidence) rather than assessing the broad range of things we could learn that might contribute to longtermist causes. I sense we had a natural desire to remove certain degrees of uncertainty so as not to get caught in total overwhelm amid so many moving variables. This may mean we need more guidance or encouragement on how to explore less obvious ways we can contribute.
Varying Levels of Imagined Influence on X-Risks: There are (seemingly) drastically different levels of influence ‘we’ might hope to have on various x-risks.
Generally, those caused by humans feel more influenceable by humans… until they absolutely aren’t, because humans are quite agentic and there are so many of us. As certain technologies become more widely distributed (e.g. nuclear weapons, or AI that allows bad actors to hack core societal systems), is the riskiness of different x-risks drastically changing? Is anyone attempting to track that?
We didn’t find any tables or reference sources on the ‘ability to reduce’ or ‘ability to prevent’ various x-risks, but I sense such a resource would be valuable, though quite challenging to create… maybe it’s a summary of known challenges and a rating of their difficulty for each category of x-risk? Perhaps we could design some type of collective forecasting challenge for this? (A tiny sketch of how ratings might be aggregated is below.)
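To make the forecasting-challenge idea slightly more concrete, here is a minimal sketch of how participant difficulty ratings could be aggregated per x-risk category. Everything in it (the categories, the 1-10 scale, the numbers, and the median-based aggregation) is a hypothetical illustration, not a real estimate or an actual proposal:

```python
# Minimal sketch: aggregating hypothetical participant ratings of how
# difficult each x-risk category is to reduce (1 = clearly reducible
# today, 10 = no known way to reduce). All values are placeholders.
from statistics import median

ratings = {
    "engineered pandemics": [4, 5, 3],
    "unaligned AI": [8, 7, 9],
    "nuclear war": [5, 6, 5],
}

for risk, scores in sorted(ratings.items()):
    # Median is a simple, outlier-resistant way to combine views.
    print(f"{risk}: median difficulty {median(scores)} (n={len(scores)})")
```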
We also didn’t find any ‘official’ lists of x-risks, so we defaulted to creating a vastly incomplete one, initially based on the list in The Precipice. Someone did share this spreadsheet created by Michael Aird, which I found more nuanced and useful.
Preparing for Plan B: There are very few organizations genuinely preparing for what happens after a civilizational collapse or global catastrophe. ALLFED and a few other emerging groups like this one seem to be taking the lead on this to some extent, but as individuals and smaller entities it seems wise to also be prepared on some level. We wondered whether the ‘prepper’ community is intertwined with EA. I don’t know it to be, but it sort of makes sense?! Some nations and international organizations have some form of ‘disaster preparedness plan’, and there’s likely much more we can do to bolster our individual and collective preparedness for responding to catastrophes.
Low-Hanging Fruit? It’s unclear how many small, medium, and large-scale ‘low-hanging fruit’ opportunities there are. We really only scratched the surface in imagining possible impactful, doable actions we might take, either with the resources we already have access to (social networks, existing skills, dollars in our personal accounts) or by aligning more resources through creative collaboration (applying for funds, networking strategically, etc.).
Developing a database of identified, promising lines of action for various x-risks and cause areas feels particularly valuable. Maybe it would even include the imagined skills/team needed to execute on each idea, such that folks could more easily self-organize around them; a rough sketch of what one entry might look like follows.
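Here is a minimal sketch of one record in such a database, assuming it were built as simple structured data. The schema, field names, and example values are entirely hypothetical, just to illustrate how people might self-organize around entries:

```python
# Minimal sketch of one record in a hypothetical "promising actions"
# database. Every field name and example value is illustrative only.
from dataclasses import dataclass

@dataclass
class PromisingAction:
    risk_area: str            # e.g. a category from an x-risk list
    description: str          # the concrete action being proposed
    skills_needed: list[str]  # skills a self-organizing team would want
    est_team_size: int        # rough guess at the people required
    status: str = "idea"      # e.g. "idea", "in progress", "done"

example = PromisingAction(
    risk_area="civilizational resilience",
    description="Compile local disaster-preparedness resources for EA groups",
    skills_needed=["research", "writing", "community organizing"],
    est_team_size=3,
)
print(example)
```

Which brings me to...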
Human Collaboration Potential: Personally, I think one of the most rapidly changing (and malleable) dynamics affecting x-risks is our ability to collaborate. Over the last 5 years I have noticed a dramatically expanded ability to collaborate with values-aligned people from around the world. Technology continues to make this easier and easier—and I’m getting better at it. I see this playing out in both positive and negative ways: whoever gets better fastest at large-scale collaboration and harnessing emerging technologies will have more and more influence on the world. There’s some upper bound in terms of trust, complexity, and human cognitive potential, but I don’t think most people operate close to it (yet). I would love to see EA put more resources toward developing our ability to collaborate well by exploring concepts related to non-naive trust, group coherence, values-alignment exercises, co-visioning across cause areas, etc.
Further Discussion Groups: If I were to run this again, I would encourage our group to spend ~20-50% of our time researching existing solutions or identifying gaps that could be filled moderately easily, in place of time spent piling on feelings of overwhelm. I would do this because I think there’s value in creatively brainstorming ideas and then talking about those ideas with people. I also think it’s more fun and motivating to feel (some hope of being) agentful rather than concluding something like: “Gosh, there’s a ton of fail points looming over us, none of them seem very tractable, and now I need to take a nap / eat my feelings / ‘push harder at the semi-meaningful thing I’m doing’.”
Someone introduced me to the concept of the ‘tragic gap’: the distance between ‘the world as it is’ and ‘the world we desire’, and our ability to bring them closer together. When that gap feels too big, we tend toward shutdown or burnout, which I have seen take a serious toll on folks’ mental health and lead to unsustainable personal effort. As an organizer, I see it as partially my responsibility to care for the psychological repercussions of deepening people’s detailed awareness of existential risks without expanding their ability to influence them.