Another historical point I’d like to make is that the common narrative about EA’s recent “pivot to longtermism” seems mostly wrong to me, or at least the shift was more partial and gradual than it’s often presented to be. All four leading strands of EA have been major themes in the movement since its relatively early days: (1) neartermist human-focused work, mostly in the developing world; (2) animal welfare; (3) the long-term future; and (4) meta. All four were present at the very first “EA Summit” in 2013 (see here), and IIRC for at least a few years before then.
MacAskill was definitely a longtermist in 2012. But I don’t think he mentioned it in Doing Good Better, or in any of the more public/introductory narrative around EA.
I think the “pivot to longtermism” narrative is a reaction to a change in communication strategy (80,000 Hours becoming explicitly longtermist, EA intro materials becoming mostly longtermist). I think critics see it as a “sharp left turn” in the AI Alignment sense, where the longtermist values were there all along but were much more dormant while EA was less powerful.
There’s a previous discussion here.
Not necessarily a deliberate strategy, though: my model is that EA started out fairly cause-neutral, people had lots of discussions about the best causes, and longtermist causes gradually emerged as the best.

E.g. in 2012 Holden Karnofsky wrote:

I consider the general cause of “looking for ways that philanthropic dollars can reduce direct threats of global catastrophic risks, particularly those that involve some risk of human extinction” to be a relatively high-potential cause. It is on the working agenda for GiveWell Labs and we will be writing more about it.
I think a lot of people moved from “I agree that others matter regardless of where or when they are, but figuring out how to help people in the future isn’t very tractable” to “ok, now I see some ways to do this, and it’s important enough that we really need to try”.
Or maybe this was just my trajectory (2011, 2018, 2022) and I’m projecting a bit...
I don’t think anyone is denying that longtermist and existential risk concerns were part of the movement from the beginning, or that longtermist concerns don’t belong in a movement about doing the most good. I think the concern is about the shift from longtermist concerns existing relatively equally alongside other cause areas to becoming much more dominant: longtermism is now far more prominent both in funding and in the attention it receives in community growth and introductory materials.