Adding onto this, the Virtual Programs (Introductory) curriculum currently has 3 weeks dedicated to longtermism, existential risks, and emerging technologies, whereas there is little to no compulsory content on poverty, global health, or climate change (with the exception of pandemics). Many of my participants have raised this. If facilitators are not able to give a good answer, it is easy for newcomers to come away with the skewed impression that EA is just longtermism and x-risk.
In particular, this comment by Max Dalton stood out. While I don't think it means "only AI safety matters," I do think it would lead to much more content on AI safety than I had expected:
Where we have to decide a content split (e.g. for EA Global or the Handbook), I want CEA to represent the range of expert views on cause prioritization. I still don’t think we have amazing data on this, but my best guess is that this skews towards longtermist-motivated or X-risk work (like maybe 70-80%).
Not exactly the same thing, but there was a whole post and discussion on whether EA is “just longtermism” last week.
https://forum.effectivealtruism.org/posts/LRmEezoeeqGhkWm2p/is-ea-just-longtermism-now-1#2_1_Funding_has_indeed_increased__but_what_exactly_is_contributing_to_the_view_that_EA_essentially_is_longtermism_AI_Safety_