Adding onto this, the Virtual Programs (Introductory) currently has 3 weeks dedicated to Longtermism, Existential Risks and Emerging Technologies, whereas there is little to no compulsory content on poverty, global health or climate change (except pandemics). Many of my participants have raised this. If facilitators are not able to give a good answer, it can be easy for newcomers to come away with the skewed perspective that EA is just longtermism and x-risk.
In particular, this comment by Max Dalton. While I don’t think that means “only AI safety matters”, I think it would lead to much more content on AI safety than I expected.
Where we have to decide a content split (e.g. for EA Global or the Handbook), I want CEA to represent the range of expert views on cause prioritization. I still don’t think we have amazing data on this, but my best guess is that this skews towards longtermist-motivated or X-risk work (like maybe 70-80%).
It’s not specific communications so much as the level of activity around specific causes: how many posts and how much discussion time are spent on AI and other intellectually interesting topics, versus more mundane but important things like malaria. There is a danger of EA being seen as just a way for people to morally justify doing the kind of things they already want to do.
I don’t think we have ever said this, but it is what some people (e.g. Timnit Gebru) have come to believe. That is why, as the EA community grows and becomes more widely known, it is important to get the message of what we believe right.
Can you give an example of communication that you feel suggests “only AI safety matters”?
Not exactly the same thing, but there was a whole post and discussion on whether EA is “just longtermism” last week.
https://forum.effectivealtruism.org/posts/LRmEezoeeqGhkWm2p/is-ea-just-longtermism-now-1#2_1_Funding_has_indeed_increased__but_what_exactly_is_contributing_to_the_view_that_EA_essentially_is_longtermism_AI_Safety_
See also the link by Michael above.