You have been part of the effective altruism movement since its inception. What are some interesting or important ways in which you think EA has changed over the years?
I think we’ve gotten a bunch more ambitious over the years. It feels like in the early days we thought we’d only get traction encouraging people to take really specific, concrete actions: for example, donating to demonstrably more effective global development interventions. Whereas it seemed that actually people found the broader ideas of effective altruism appealing. Plus EA’s research agenda seems to have gotten more ambitious—rather than only trying to figure out which current charities have the most effect, EAs are now trying to figure out what we can do that will improve the long-run future as much as possible. And again it feels as if the research there has shifted in a more difficult and general direction: how to make sure that transformative AI is developed safely is an incredibly difficult challenge, but it feels a bit more concrete and contained than how to prevent a great-power war or how to build the best long-term institutions (though this difference might well be simply how I think of these problems). In the early days these latter problems were discussed, but it didn’t feel as if we had much constructive to say about them, beyond having done a bit of research into things like how to become a politician.
We seem to have gotten better at engaging with experts and other communities. In the early days, it felt as if a large part of the EA narrative was ‘look at all these things the rest of the world is getting wrong’. That might have been partly necessary for carving out a niche, and it was usually picking up on something true. But it wasn’t a great way of engaging with others. Whereas now it seems like we do a better job of finding out what others are doing really well that we want to learn about and build on (e.g. with things like speakers at EA Global and the interviews on the 80,000 Hours podcast).
I worry that there’s a bit more antagonism and unfriendliness in the movement now. I think this is mostly just due to it being bigger: when there are few enough of you, you all know each other somewhat and so are likely to give each other the benefit of the doubt. Whereas when lots of people only know each other online, it feels easier to assume the worst of each other. Plus engaging online rather than in person tends to be less friendly in general. I’m not sure this is a real effect, though.