What EAG sessions would you like on Global Catastrophic Risks?
What is the topic of the session?
Who would you like to give the session?
What is the format of the talk?
Why is it important?
Here are all the questions in this series:
https://forum.effectivealtruism.org/posts/WQTEuxkXyCy9QFJCb/what-eag-sessions-would-you-like-to-see-on-meta-ea
https://forum.effectivealtruism.org/posts/Nq3bbFjKjs5jPZnou/what-eag-sessions-would-you-like-to-see-on-global-priorities
https://forum.effectivealtruism.org/posts/iQWfeoXFebrEBh4qq/what-eag-sessions-would-you-like-to-see-on-horizon-scanning
https://forum.effectivealtruism.org/posts/AKBong8tuK65MWGjd/what-eag-sessions-would-you-like-on-global-catastrophic
https://forum.effectivealtruism.org/posts/AfmoMv8ixtFGhemnH/what-eag-sessions-would-you-like-to-attend-on-biorisk
https://forum.effectivealtruism.org/posts/6Qw3JvEDkAzmaqpTK/what-eag-sessions-would-you-like-on-ai
https://forum.effectivealtruism.org/posts/rpNwa94ep3jEyFSDB/what-eag-sessions-would-you-like-on-epistemics
https://forum.effectivealtruism.org/posts/wAJ4tLbTuhaoYN7Py/what-eag-sessions-would-you-like-on-animal-welfare
A Ranking of GCRs—Luisa Rodriguez and Robert Wiblin—Longform Talk
A quantitative ranking of GCRs in terms of Importance, Tractability, and Neglectedness, taking into account how unlikely they are to actually kill all humans. This is important because Luisa's work on whether humanity would survive seems underrated in terms of how it affects our rankings of different GCRs.
I already posted this in the post about EAG sessions on AI, but I'm reposting it here since I think it's extremely important.
What is the topic of the session?
Suffering risks, also known as S-risks
Who would you like to give the session?
Possible speakers could be Brian Tomasik, Tobias Baumann, Magnus Vinding, Daniel Kokotajlo, or Jesse Clifton, among others.
What is the format of the talk?
The speaker would discuss some of the different scenarios in which astronomical suffering on a cosmic scale could emerge, such as risks from malevolent actors, a near-miss in AI alignment, and suffering-spreading space colonization. They would then discuss possible strategies for reducing S-risks, and some of the open questions related to S-risks and how to prevent them.
Why is it important?
So that worse-than-death scenarios can be avoided where possible.
Talking about X-risks—What is the best way to communicate the importance of preventing global catastrophic risks to people who have not heard of them before?
I'm not sure about the best format; perhaps a workshop. But I think this would be very useful both for introducing people to EA in a non-off-putting way and for decreasing the social cost of pursuing careers in this area.