These individuals—often senior scholars within the field—told us in private that they were concerned that any critique of central figures in EA would result in an inability to secure funding from EA sources, such as OpenPhilanthropy. We don’t know if these concerns are warranted. Nonetheless, any field that operates under such a chilling effect is neither free nor fair. Having a handful of wealthy donors and their advisors dictate the evolution of an entire field is bad epistemics at best and corruption at worst.
How do we solve this?
If I imagine myself dependent on the funding of someone, that would change my behaviour. Anyone have any ideas of how to get around this?
- Tenure is the standard academic approach, but does that lead to better work overall?
- A wider set of funders who will fund work even if it attacks the other funders?
- OpenPhil making a statement to fund high quality work they disagree with
- Some kind of way to anonymously survey EA academics to get a sense of whether there is a point that everyone thinks but is too scared to say
- Some kind of prediction market on views that are likely to be found to be wrong in the future.
I think offering financial incentives specifically for red teaming makes sense. I tend to think red teaming is systematically undersupplied because people are concerned (often correctly in my experience with EA) that it will cost them social capital, and financial capital can offset that.
I’m a fan of the CEEALAR funding model—giving small amounts to dedicated EAs, with less scrutiny and less prestige distribution. IMO it is less incentive-distorting than more popular EA funding models.
Most of these ideas sound interesting to me. However —
- OpenPhil making a statement to fund high quality work they disagree with

I’m not quite sure what this means? I’m reading it as “funding work which looks set to make good progress on a goal OP don’t believe is especially important, or even net bad”. And that doesn’t seem right to me.
Similar ideas that could be good —
- OP/other grantmakers clarifying that they will consider funding you on equal terms even if you’ve publicly criticised OP/that grantmaker
- More funding for thoughtful criticisms of effective altruism and longtermism (theory and practice)
I’m especially keen on the latter!
Perhaps a general willingness to commit X% of funding to criticism of areas which are heavily funded by an EA-aligned funding organization could work as a heuristic for enabling the second idea.
(E.g. if “pro current X-risk” research in general receives N in funding, then some percentage of N would be made available for “critical work” in the same area. But in science it can sometimes be hard to even say which work is critical and which builds on top of existing work.)
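For concreteness, here is a minimal sketch of that set-aside heuristic, assuming a flat X% earmark per area; the area names, rate, and figures are purely illustrative, not real grant totals:

```python
# Hypothetical sketch of an "X% for critical work" set-aside.
# All names and numbers below are made up for illustration.

def critical_work_budget(area_funding: dict[str, float],
                         set_aside_rate: float = 0.10) -> dict[str, float]:
    """For each area, earmark set_aside_rate * funding for critical/red-team work."""
    return {area: amount * set_aside_rate for area, amount in area_funding.items()}

if __name__ == "__main__":
    # Illustrative figures only.
    funding = {"x-risk research": 20_000_000, "global health": 50_000_000}
    print(critical_work_budget(funding))
    # -> {'x-risk research': 2000000.0, 'global health': 5000000.0}
```

Of course, the hard part isn’t the arithmetic but deciding what counts as “critical work” in the first place.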
Sounds good. At the more granular and practical end, this sounds like red-teaming, which is often just good practice.