One thing I noticed is that EA leaders seem to be concerned with both excessive intellectual weirdness and excessive intellectual centralization. Was this a matter of leaders disagreeing with one another, or did some leaders express both positions?
In general, I think most people chose one concern or the other, but certainly not all of them. If I had to summarize a sort of aggregate position (a hazardous exercise, of course), I'd say:
"I wish we had more people in the community building on well-regarded work to advance our understanding of established cause areas. It seems like a lot of the new work that gets done (at least outside of the largest orgs) involves very speculative causes and doesn't apply the same standards of rigor used by, say, GiveWell. Meanwhile, when it comes to the major established cause areas, people seem content to listen to orgs' recommendations without trying to push things forward, which creates a sense of stagnation."
Again, this is an aggregate: I don't think any single attendee would endorse this exact statement as a match for their beliefs.
But I think it could be interesting to try & make this tradeoff more explicit in future surveys.
That's an interesting idea! We'll revisit this post the next time we decide to send out this survey (it's not necessarily something we'll do every year), and hopefully whoever's in charge will consider how to let respondents prioritize avoiding weirdness vs. avoiding centralization.
Another thought is that EA leaders are going to suggest nudges based on the position they perceive us to be occupying along a particular dimension, but perceptions may differ.
This also seems likely to be a factor behind the distribution of responses.
Thanks for the aggregate position summary! I'd be interested to hear more about the motivation behind that wish, as it seems likely to me that doing shallow investigations of very speculative causes would actually be the comparative advantage of people who aren't employed at existing EA organizations. I'm especially curious given the high probability people assigned to the existence of a Cause X that should be getting so many resources. It seems like having people who don't work at existing EA organizations (and are thus relatively unaffected by existing blind spots) do shallow investigations of very speculative causes would be just the thing for discovering Cause X.
For a while now I've been thinking that the crowdsourcing of alternate perspectives ("breadth-first" rather than "depth-first" exploration of idea space) is one of the internet's greatest strengths. (I also suspect "breadth-first" idea exploration is underrated in general.) On the flip side, I'd say one of the internet's greatest weaknesses is the ease with which disagreements become unnecessarily dramatic. So if someone were to do a meta-analysis of recent literature on, say, whether remittances are actually good for developing economies in the long run (critiquing GiveDirectly; btw, I couldn't find any reference to academic research on the impact of remittances in GiveWell's current GiveDirectly profile, maybe they just didn't think to look it up, a case study in the value of an alternate perspective?), or whether usage of malaria bed nets for fishing is increasing (critiquing AMF), there's a sense in which we'd be playing against the strengths of the medium.

Anyway, if organizations wanted critical feedback on their work, they could easily request that critical feedback publicly (solicited critical feedback is less likely to cause drama or bad feelings than unsolicited critical feedback), or even offer cash prizes for the best critiques, but I see few cases of organizations doing that.
Maybe part of what's going on is that shallow investigations of very speculative causes only rarely amount to something? See this previous comment of mine for more discussion.