I’m excited to read any list you come up with at the end of this!
Some I thought of:
1. How likely is it that we’re living at the most influential time in history?
2. What is the total x-risk this century?
3. Are we saving/investing enough for the future?
4. How much less of an x-risk is AI if there is no “fast takeoff”? How much less if the paperclip scenario is very unlikely? And how unlikely are those things? [In short: how much should we update on the risk from AI, given that some people have updated away from Bostrom-style scenarios?]
5. How important are s-risks? Should we place more emphasis on reducing suffering than on creating happiness?
6. Do anti-aging research, animal welfare work, and/or economic growth speedups have positive very-long-term benefits in expectation?
7. Should EA stay as a “big tent” or split up into different movements?
8. How much should EA be trying to grow?
9. Does EA pay enough attention to climate change?
Thanks for the list! As a follow-up, I’ll try to list places online where such debates have occurred for each entry:
1. https://forum.effectivealtruism.org/posts/XXLf6FmWujkxna3E6/are-we-living-at-the-most-influential-time-in-history-1
2. Toby Ord has estimates in The Precipice. I assume most discussion occurs on specific risks.
3. Lots of discussion on this; a summary is here: https://forum.effectivealtruism.org/posts/7uJcBNZhinomKtH9p/giving-now-vs-later-a-summary. More recently, see https://forum.effectivealtruism.org/posts/amdReARfSvgf5PpKK/phil-trammell-philanthropy-timing-and-the-hinge-of-history
4. Best discussion of this is probably here: https://www.lesswrong.com/posts/HBxe6wdjxK239zajf/what-failure-looks-like
5. Most of the content at https://longtermrisk.org/ addresses s-risks. In terms of pushback, Carl Shulman wrote http://reflectivedisequilibrium.blogspot.com/2012/03/are-pain-and-pleasure-equally-energy.html and Toby Ord wrote http://www.amirrorclear.net/academic/ideas/negative-utilitarianism/ (though I don’t find either compelling). See also a lot of Simon Knutsson’s work, e.g. https://www.simonknutsson.com/thoughts-on-ords-why-im-not-a-negative-utilitarian
6a. https://forum.effectivealtruism.org/posts/LxmJJobC6DEneYSWB/effects-of-anti-aging-research-on-the-long-term-future and https://forum.effectivealtruism.org/posts/jYMdWskbrTWFXG6dH/a-general-framework-for-evaluating-aging-research-part-1
6b. https://forum.effectivealtruism.org/posts/W5AGTHm4pTd6TeEP3/should-longtermists-mostly-think-about-animals and https://forum.effectivealtruism.org/posts/ndvcrHfvay7sKjJGn/human-and-animal-interventions-the-long-term-view
6c. https://forum.effectivealtruism.org/posts/xh37hSqw287ufDbQ7/existential-risk-and-economic-growth-1
7. Nothing particularly comes to mind, although I assume there’s stuff out there.
8. https://80000hours.org/2020/02/anonymous-answers-effective-altruism-community-and-growth/
9. E.g. here, which also links to more discussions: https://forum.effectivealtruism.org/posts/NLJpMEST6pJhyq99S/notes-could-climate-change-make-earth-uninhabitable-for
Re: 9 - I wrote this back in April 2019. There have been more recent comments from Will in his AMA, and Toby in this EA Global talk (link with timestamp).