Many of these concerns resonated with me.
As a relative outsider, my understanding of EA formed around its online content, which emphasises utilitarianism and longtermism. When speaking to EAs in person, I’m often surprised to find these perspectives held more weakly by community members (and leaders?) than I expected. I think there are messaging issues here. Part of the issue might be that longtermist causes are more interesting to write and talk about. We should be careful to allocate attention to cause areas in proportion to their significance.
Too much of the ecosystem feels dependent on a few grantmakers / re-granters. This concentrates too much power in relatively few people’s hands. (At the same time, it seems to be a very hard problem to solve; no particular initiatives come to mind.)
I see EA’s concerns with reputational risk and optics as a flaw stemming from its overly utilitarian perspective. Manipulating the narrative has short-term reputational benefits and hidden long-term costs.
At the same time, I am sceptical of EA’s ability to adequately address these issues; such concerns have been raised before without leading to significant change. Many of these issues seem to have arisen from the centralisation of power and the over-weighting of community leaders’ opinions, yet the community is simultaneously decentralised enough that coordinating such a change is difficult.
That’s interesting; I’ve had the exact opposite experience. I was attracted to EA for reasons similar to those Zoe and Ben mention in the article, such as global poverty and health, but then found that everyone I was meeting in the EA community was working on longtermist projects (mostly AI alignment and safety). We have discussed whether, since my club was at a university, most of the students in the club at the time were simply more career-aligned with longtermist work. I don’t know how accurate that is, though.