I think critics see it as a “sharp left turn” in the AI Alignment sense, where the longtermist values were there all along but were much more dormant while EA was less powerful.
Not necessarily a deliberate strategy though—my model is that EA started out fairly cause-neutral, people had lots of discussions about the best causes, and longtermist causes gradually emerged as the best. E.g. in 2012 Holden Karnofsky wrote:

I consider the general cause of “looking for ways that philanthropic dollars can reduce direct threats of global catastrophic risks, particularly those that involve some risk of human extinction” to be a relatively high-potential cause. It is on the working agenda for GiveWell Labs and we will be writing more about it.
I think a lot of people moved from “I agree others matter regardless of where or when they are, but figuring out how to help people in the future isn’t very tractable” to “ok, now I see some ways to do this, and it’s important enough that we really need to try”.
Or maybe this was just my trajectory (2011, 2018, 2022) and I’m projecting a bit...