One thing I noticed is that EA leaders seem to be concerned with both excessive intellectual weirdness and also excessive intellectual centralization. Was this a matter of leaders disagreeing with one another, or did some leaders express both positions?
In general, I think most people chose one concern or the other, but certainly not all of them. If I had to summarize a sort of aggregate position (a hazardous exercise, of course), I'd say:
"I wish we had more people in the community building on well-regarded work to advance our understanding of established cause areas. It seems like a lot of the new work that gets done (at least outside of the largest orgs) involves very speculative causes and doesn't apply the same standards of rigor used by, say, GiveWell. Meanwhile, when it comes to the major established cause areas, people seem content to listen to orgs' recommendations without trying to push things forward, which creates a sense of stagnation."
Again, this is an aggregate: I don't think any single attendee would approve of exactly this statement as a match for their beliefs.
But I think it could be interesting to try & make this tradeoff more explicit in future surveys.
That's an interesting idea! We'll revisit this post the next time we decide to send out this survey (it's not necessarily something we'll do every year), and hopefully whoever's in charge will consider how to let respondents prioritize avoiding weirdness vs. avoiding centralization.
Another thought is that EA leaders are going to suggest nudges based on the position they perceive us to be occupying along a particular dimension, but perceptions may differ.
This also seems likely to be a factor behind the distribution of responses.
Thanks for the aggregate position summary! I'd be interested to hear more about the motivation behind that wish, since it seems likely to me that doing shallow investigations of very speculative causes would actually be the comparative advantage of people who aren't employed at existing EA organizations. I'm especially curious given the high probability people assigned to the existence of a Cause X that deserves substantial resources. It seems like having people who don't work at existing EA organizations (and are thus relatively unaffected by existing blind spots) do shallow investigations of very speculative causes would be just the thing for discovering Cause X.
For a while now I've been thinking that the crowdsourcing of alternate perspectives ("breadth-first" rather than "depth-first" exploration of idea space) is one of the internet's greatest strengths. (I also suspect "breadth-first" idea exploration is underrated in general.) On the flip side, I'd say one of the internet's greatest weaknesses is the ease with which disagreements become unnecessarily dramatic. So if someone were to do a meta-analysis of recent literature on, say, whether remittances are actually good for developing economies in the long run (critiquing GiveDirectly; incidentally, I couldn't find any reference to academic research on the impact of remittances in GiveWell's current GiveDirectly profile. Maybe they just didn't think to look it up? If so, that's a case study in the value of an alternate perspective), or whether usage of malaria bed nets for fishing is increasing (critiquing AMF), there's a sense in which we'd be playing against the strengths of the medium.

Anyway, if organizations wanted critical feedback on their work, they could easily request that critical feedback publicly (solicited critical feedback is less likely to cause drama or bad feelings than unsolicited critical feedback), or even offer cash prizes for the best critiques, yet I see few cases of organizations doing that.
Maybe part of what's going on is that shallow investigations of very speculative causes only rarely amount to something? See this previous comment of mine for more discussion.