I share the surprise and dismay other commenters have expressed about the experience you report around drafting this preprint. While I disagree with many claims and framings in the paper, I agreed with others, and the paper as a whole certainly doesn’t seem beyond the usual pale of academic dissent. I’m not sure what those who advised you not to publish were thinking.
In this comment, I’d like to turn attention to the recommendations you make at the end of this Forum post (rather than the preprint itself). Since these recommendations are somewhat more radical than those you make in the preprint, I think it would be worth discussing them specifically, particularly since in many cases it’s not clear to me exactly what is being proposed.
Having written what follows, I realise it’s quite repetitive, so if in responding you wanted to pick a few points that are particularly important to you and focus on those, I wouldn’t blame you!
You claim that EA needs to...
diversify funding sources by breaking up big funding bodies
Can you explain what this would mean in practice? Most of the big funding bodies are private foundations. How would we go about breaking these up? Who even is “we” in this instance?
[diversify funding sources] by reducing each org’s reliance on EA funding and tech billionaire funding
What sorts of funding sources do you think EA orgs should be seeking, other than EA orgs and individual philanthropists (noting that EA-adjacent academic researchers already have access to the government research funding apparatus)?
produce academically credible work
Speaking as a researcher who has spent a lot of time in academia, I think how much I care about work being “academically credible” depends a lot on the field. In many cases, I think post-publication review in places like the Forum is more robust and useful than pre-publication academic review.
Many academic fields (especially in the humanities) seem to have quite bad epistemic and political cultures, and even those that don’t often have very particular ideas of what sorts of problems & approaches are suitable for peer-reviewed articles (e.g. requiring that work be “interesting” or “novel” in particular ways). And the current peer-review system is well-known to be painfully inadequate in many ways.
I don’t want to overstate this – I think there are many cases where the academic publication route is a good option, for many reasons. But I’ve read a lot of pretty bad academic papers in my time, sometimes in prestigious journals, and it’s not all that rare for a Forum report to significantly exceed the quality of the academic literature. I don’t think academic credibility per se is something we should be aiming for on epistemic grounds. But perhaps you had other benefits in mind?
set up whistle-blower protection
Can you elaborate on what sorts of concrete systems you think would be useful here? Whistle-blower protection is usually intra-organisational – is this what you have in mind here, or are you imagining something more pan-community?
actively fund critical work
This sounds great, but I think it is probably quite hard to implement in practice in a way that seems appealing. A lot depends on the details. Can you elaborate on what sorts of concrete proposals you would endorse here?
For example, do you think OpenPhil should deliberately fund “red-team” work they disagree with, solely for the sake of community epistemics? If so, how should they go about doing that?
allow for bottom-up control over how funding is distributed
I think having ways to aggregate small-donor preferences regarding EA grantees is valuable. I don’t think it should replace large philanthropic donors with concentrated expertise. But I think I’d have a better opinion if I had a better idea of what you were advocating.
diversify academic fields represented in EA
This isn’t something you can just change by fiat. You could modify the core messages of EA to deliberately appeal to a wider variety of backgrounds, but that seems like it has a lot of important downsides. Again, I think I would need a better idea of what exactly you have in mind as interventions to really evaluate this.
make the leaders’ forum and funding decisions transparent
These seem like two different cases. I’m generally pro public reporting of grants, but I don’t really know what you have in mind for the leaders’ forum (or other similar meetings).
stop glorifying individual thought-leaders
I’m guessing for more detail on this we should refer to the section on intelligence from your earlier post? I’m torn between sympathy and scepticism here, and don’t feel like I have much to add, so let’s move on to...
stop classifying everything as info hazards
OK, but how do you handle actual serious information hazards?
I’m on record in various places (e.g. here) saying that I think secrecy has lots of really serious downsides, and I still think these downsides are frequently underrated by many EAs. I certainly think that there is substantial progress still to be made in improving how we think about and deal with these problems. But that doesn’t make the core problem go away – sometimes information really is hazardous, in a fairly direct (though rarely straightforward) way.
While I appreciate that we’re all busy people with many other things to do than reply to Forum comments, I do think I would need clarification (and per-item argumentation) of the kind I outline above in order to take a long list of sweeping changes like this seriously, or to support attempts at their implementation.
Especially given the claim that “EA needs to make such structural adjustments in order to stay on the right side of history”.