EA is vulnerable to groupthink, echo chambers, and excessive deference to authority.
A bunch of big EA mistakes and failures were perhaps (partly) due to these things.
A lot of external criticism of EA traces back to this.
I’m a bit skeptical that funding small projects that try to tackle this is really stronger than other community-building work on the margin. Is there an example of a small project focused on epistemics that had a really meaningful impact? Perhaps by steering an important decision or helping someone (re)consider pursuing high-impact work?
I’m worried there’s not a strong track record here. Maybe you want to do some exploratory funding here, but I’m still interested in what you think the outcomes might be.
Mm, they don’t necessarily need to be small! (Of course, big projects often start small, and our funding is more likely to look like early/seed funding in these instances.) E.g. I’m thinking of LessWrong or something like that. A concrete example of a smaller project would be ESPR/SPARC, which have a substantial (albeit not sole) focus on epistemics and rationality and have shown some good evidence of positive effects, e.g. in Open Phil’s longtermism survey.
But I do think the impacts might be more diffuse than for other grants. E.g. we won’t necessarily be able to count participants, look at quality, and compare to other programmes we’ve funded.
Some of the sorts of outcomes I have in mind are just things like altered cause prioritisation, different projects getting funded, generally better decision-making.
I expect we would in practice judge whether these seemed on track to be useful by a combination of (1) case studies/stories of specific users and the changes they made, and (2) statistics about usage.
(I do like your questions/pushback though; it’s making me realise that this is all a bit vague, and maybe, when push comes to shove with certain applications that fit into this category, I could end up confused about the theory of change and not wanting to fund.)
I don’t know much about LW/ESPR/SPARC but I suspect a lot of their impact flows through convincing people of important ideas and/or the social aspect rather than their impact on community epistemics/integrity?
> Some of the sorts of outcomes I have in mind are just things like altered cause prioritisation, different projects getting funded, generally better decision-making.
Similarly, if the goal is to help people think about cause prioritisation, I think fairly standard EA retreats / fellowships are quite good at this? I’m not sure we need some intermediary step like “improve community epistemics”.
Appreciate you responding and tracking this concern though!
> I think fairly standard EA retreats / fellowships are quite good at this
Maybe. To take cause prio as an example, my impression is that the framing is often a bit more like: ‘Here are lots of cause areas EAs think are high impact! Also, cause prioritisation might be very important.’ (That’s basically how I interpret the vibe and emphasis of the EA Handbook / EAVP.) Not so much: ‘Cause prio is really important. Let’s actually try to do that and think carefully about how to do this well, without just deferring to existing people’s views.’
So there’s a direct version along those lines that I’d be excited about.
Although, perhaps contradictorily, I’m also envisaging something even more indirect than the retreats/fellowships you mention as a possibility, where the impact comes through generally developing skills that enable people to be top contributors to EA thinking, top cause areas, etc.
> I don’t know much about LW/ESPR/SPARC but I suspect a lot of their impact flows through convincing people of important ideas and/or the social aspect rather than their impact on community epistemics/integrity?
Yeah, I think this is part of it. But I also think they help by getting people to think carefully and arrive at more sensible and better processes/opinions.