I think we’ve gotten a lot more ambitious over the years. It feels like in the early days we thought we’d only get traction by encouraging people to take really specific, concrete actions: for example, donating to demonstrably more effective global development interventions. But it turned out that people actually found the broader ideas of effective altruism appealing. EA’s research agenda also seems to have gotten more ambitious—rather than only trying to figure out which current charities have the most impact, EAs are now trying to figure out what we can do to improve the long-run future as much as possible. And again it feels as if the research there has shifted in a more difficult and general direction: how to make sure that transformative AI is developed safely is an incredibly difficult challenge, but it feels a bit more concrete and contained than how to prevent great-power war or how to build the best long-term institutions (though this difference might well just be how I think of these problems). In the early days these latter problems were discussed, but it didn’t feel as if we had much that was constructive to say about them, beyond having done a bit of research into things like how to become a politician.