[Question] What EAG sessions would you like on AI?

Please include:

- What is the topic of the talk?
- Who would you like to give the talk?
- What is the format of the talk?
- Why is it important?
Here are all the questions in this series:
https://forum.effectivealtruism.org/posts/WQTEuxkXyCy9QFJCb/what-eag-sessions-would-you-like-to-see-on-meta-ea
AI risk for beginners/dummies. I know almost nothing about it, and my guess is I’m not alone.
Does anyone know who would be good for this talk? I don’t.
I think Rob Miles's YouTube channel is a good resource for beginners; he has a lot of nice videos there, and he's a good speaker.
Hey Sandy, could you edit your answer and put Rob as a suggested speaker?
I would like to see workshops targeted at people at all different stages of the pipeline (although my expectation is that everyone at EAG would at least know the super basics of what AI risk is and why we might care about it).
So, for example, you could design a program like the following:

- How should you prioritise AI safety? - A workshop designed to help you figure out how important you should consider it as a cause area and whether you should personally focus on it.
- So you want to work on AI safety - A talk for people who have decided to work on AI safety, to find out about the opportunities in this space.
- Deep dive - Events for people already focusing on AI safety to engage with each other on issues of specific importance.
Obviously, you could replace these with different events, but the point is to cover all bases.
I'd prefer it if these were three separate comments so I could upvote them separately.
It's one unified idea, though, and without the examples it would be unclear.
What is the topic of the talk?
Suffering risks, also known as S-risks
Who would you like to give the talk?
Possible speakers could be Brian Tomasik, Tobias Baumann, Magnus Vinding, Daniel Kokotajlo, or Jesse Clifton, among others.
What is the format of the talk?
The speaker would discuss some of the different scenarios in which suffering on an astronomical scale could emerge, such as risks from malevolent actors, a near-miss in AI alignment, and suffering-spreading space colonization. They would then discuss possible strategies for reducing S-risks, along with some of the open questions about S-risks and how to prevent them.
Why is it important?
So that worse-than-death scenarios can be avoided, if possible.
Explain AI risk - Rob Bensinger / Andrew Ngo / Neel Nanda - Workshop
Split people into pairs and have them explain AI risk to one another; then have their partner explain it back. Give tips on how the explanations could be simpler. Use Slido to collect comments on what most people found difficult, and have the speakers answer those. Then try again with a new pair. How do you feel?