Thanks for this post!

One thing I noticed is that EA leaders seem to be concerned with both excessive intellectual weirdness and also excessive intellectual centralization. Was this a matter of leaders disagreeing with one another, or did some leaders express both positions?
There isn’t necessarily a contradiction in expressing both positions. For example, perhaps there’s an intellectual center and it’s too weird. (Though, if the weirdness comes in the form of “People saying crazy stuff online”, this explanation seems less likely.) You could also argue that we are open to weird ideas, just not the right weird ideas.
But I think it could be interesting to try & make this tradeoff more explicit in future surveys. It seems plausible that the de facto result of announcing survey results such as these is to move us in some direction along a single coarse intellectual centralization/decentralization dimension. (As I said, there might be a way to square this circle, but if so I think you want a longer post explaining how, not a survey like this.)
Another thought is that EA leaders are going to suggest nudges based on the position they perceive us to be occupying along a particular dimension—but perceptions may differ. Maybe one leader says “we need more talk and less action”, and another leader says “we need less talk and more action”, but they both agree on the ideal talk/action balance, they just disagree about the current balance (because they’ve made different observations about the current balance).
One way to address this problem in general for some dimension X is to have a rubric with, say, five written descriptions of levels of X the community could aim for, and ask each leader to select the level that seems optimal to them. This scheme has another advantage: if there's a fair amount of variation in levels of X across the community, the community could be below the optimal level on average, and yet a public announcement from leaders that levels of X should move up (with no target level specified) could push people who are already above the ideal level even further above it. Publishing an explicit target level avoids that failure mode.
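To make the overshoot dynamic concrete, here's a minimal simulation sketch in Python. All the numbers are made up for illustration (the ideal level, the community distribution, the nudge sizes); this isn't a claim about actual community data, just a toy model of the argument above:

```python
import random

# Toy illustration of the overshoot argument: community members sit at
# varying levels of some dimension X, the community mean is below the
# ideal level, and we compare a directional announcement ("levels of X
# should move up") against a published target level.

random.seed(0)

IDEAL = 5.0  # hypothetical optimal level of X
members = [random.gauss(4.0, 2.0) for _ in range(1000)]  # mean below ideal, lots of variation

def directional_nudge(level, step=1.0):
    # Everyone hears "move up" and shifts upward, including people
    # who were already above the ideal.
    return level + step

def target_nudge(level, target=IDEAL, rate=0.5):
    # With a published target, people adjust toward it from either side.
    return level + rate * (target - level)

def mean_distance_from_ideal(levels):
    return sum(abs(x - IDEAL) for x in levels) / len(levels)

print("before:           ", round(mean_distance_from_ideal(members), 2))
print("directional nudge:", round(mean_distance_from_ideal([directional_nudge(x) for x in members]), 2))
print("explicit target:  ", round(mean_distance_from_ideal([target_nudge(x) for x in members]), 2))
```

Under these assumptions, the directional nudge improves the community average while making the people already above the ideal individually worse off, whereas the explicit target pulls everyone toward the ideal from both sides.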
One thing I noticed is that EA leaders seem to be concerned with both excessive intellectual weirdness and also excessive intellectual centralization. Was this a matter of leaders disagreeing with one another, or did some leaders express both positions?
In general, I think most people chose one concern or the other, but certainly not all of them. If I had to summarize a sort of aggregate position (a hazardous exercise, of course), I’d say:
“I wish we had more people in the community building on well-regarded work to advance our understanding of established cause areas. It seems like a lot of the new work that gets done (at least outside of the largest orgs) involves very speculative causes and doesn’t apply the same standards of rigor used by, say, GiveWell. Meanwhile, when it comes to the major established cause areas, people seem content to listen to orgs’ recommendations without trying to push things forward, which creates a sense of stagnation.”
Again, this is an aggregate: I don’t think any single attendee would approve of exactly this statement as a match for their beliefs.
But I think it could be interesting to try & make this tradeoff more explicit in future surveys.
That’s an interesting idea! We’ll revisit this post the next time we decide to send out this survey (it’s not necessarily something we’ll do every year), and hopefully whoever’s in charge will consider how to let respondents prioritize avoiding weirdness vs. avoiding centralization.
Another thought is that EA leaders are going to suggest nudges based on the position they perceive us to be occupying along a particular dimension—but perceptions may differ.
This also seems likely to be a factor behind the distribution of responses.
Thanks for the aggregate position summary! I’d be interested to hear more about the motivation behind that wish, as it seems likely to me that doing shallow investigations of very speculative causes would actually be the comparative advantage of people who aren’t employed at existing EA organizations. I’m especially curious given the high probability that people assigned to the existence of a Cause X that should be getting so many resources. It seems like having people who don’t work at existing EA organizations (and are thus relatively unaffected by existing blind spots) do shallow investigations of very speculative causes would be just the thing for discovering Cause X.
For a while now I’ve been thinking that the crowdsourcing of alternate perspectives (“breadth-first” rather than “depth-first” exploration of idea space) is one of the internet’s greatest strengths. (I also suspect “breadth-first” idea exploration is underrated in general.) On the flip side, one of the internet’s greatest weaknesses is the ease with which disagreements become unnecessarily dramatic. So if someone were to do a meta-analysis of recent literature on, say, whether remittances are actually good for developing economies in the long run (a critique of GiveDirectly; incidentally, I couldn’t find any reference to academic research on the impact of remittances in GiveWell’s current GiveDirectly profile. Maybe they just didn’t think to look it up, which would itself be a case study in the value of an alternate perspective), or whether usage of malaria bed nets for fishing is increasing (a critique of AMF), there’s a sense in which we’d be playing against the strengths of the medium.

In any case, if organizations wanted critical feedback on their work, they could easily request it publicly (solicited critical feedback is less likely to cause drama and bad feelings than unsolicited critical feedback), or even offer cash prizes for the best critiques, and I see few organizations doing that.
Maybe part of what’s going on is that shallow investigations of very speculative causes only rarely amount to something? See this previous comment of mine for more discussion.