This comment is pretty long, but TLDR: peer review and academia have their own problems, some similar to EA, some not. Maybe a hybrid approach works, and maybe we should consult with people with expertise in social organisation of science.
To some extent I agree with this. Whilst I’ve been wanting more academic rigour in X-Risk for a while, peer review is certainly no panacea, although I think it is probably better than the current culture of deferring to blog posts as much as we do.
I think you are right that traditional academia really has its problems, and name recognition is also still an issue there (e.g. Nobel Prize winners are 70% more likely to get through peer review). Nonetheless, certainly in the field I have worked in (solar geoengineering), name recognition and agreement with the ‘thought leaders’ are definitely less incentivised than in EA.
One potential response is to aim for a balance between peer review, the current EA culture and commissioned reports. We could set up an X-Risk journal with editors and reviewers who are a) dedicated to pluralism and b) willing to publish anything that is methodologically sound irrespective of result. Alternatively, a sort of open peer review system where pre-prints are published publicly, with reviewers’ comments and then responses to these as well. However, for major decisions we could rely on reports written and investigated by a number of people. Open Phil have done this to an extent, but having broader panels etc. to do these reports may be much more useful. Certainly it’s something to try.
I do think it’s really difficult, but the current EA status quo is not working. Perhaps EA consulting with some thinkers on the social organisation of science to better design how we do these things may be good, as there certainly are people with this expertise.
And it is definitely possible to commission structured expert elicitations, and to directly fund specific bits of research.
Moreover, peer review can sometimes be pretty important for policy. This is certainly the case in the climate change space, where you won’t be incorporated into UN decision making and IPCC reports unless your work is peer reviewed.
Finally, I think your points about the ‘agreed upon methods’ sort of thing are really good, and this is something I’m trying to work on in X-Risk. I talk about this a little in my ‘Beyond Simple Existential Risk’ talk, and am writing a paper with Anders Sandberg, SJ Beard and Adrian Currie on this at present. I’d be keen to hear your thoughts on this if you’re interested!
Rethink Priorities occasionally pays non-EA subject matter experts (usually academics that are formally recognized by other academics as the relevant and authoritative subject-matter experts, but not always) to review some of our work. I think this is a good way of creating a peer review process without having to publish formally in journals. Though Rethink Priorities also occasionally publishes formally in journals.
Maybe more orgs should try to do that? (I think Open Phil and GiveWell do this as well.)
> Finally, I think your points about the ‘agreed upon methods’ sort of thing are really good
Since you liked that thought, let me think out loud a bit more.
I think it’s practically impossible to be rigorous without a paradigm.
Old sciences have paradigms and mostly work well, but the culture is not nice to people trying to form ideas outside the paradigm, because that is necessarily less rigorous. I remember some academic complaining about this on a podcast. They were doing a different approach within cognitive science and had problems with peer review because they were not focused enough on measuring the standard things.
On the other hand there is EA/LW style AI Safety research, where everyone talks about how pre-paradigmatic we are. Vague speculative ideas, without inferential depth, get more appreciation and attention. By now there are a few paradigms, the clearest case being Vanessa’s research, which almost no one understands. I think part of the reason her work is hard to understand is exactly because it is rigorous within-paradigm research. It’s specific proofs within a specific framework. It has both more details and more prerequisites. While reading pre-paradigmatic blog posts is like reading the first intro chapter of a textbook (which is always less technical), the within-paradigm stuff is more like reading chapter 11, and you really have to have read the previous chapters, which makes it less accessible. Especially since no one has collected the previous chapters for you, and the person writing it is not selected for their pedagogical skills.
Research has to start as pre-paradigmatic. But I think that the dynamic described above makes it hard to move on, to pick some paradigm to explore and start working out the details. Maybe a field at some point needs to develop a culture of looking down on less rigorous work, for any rigorous work to really take hold? I’m really not sure. And I don’t want to lose the explorative part of EA/LW style AI Safety research either. Possibly rigour will just develop naturally over time?

End of speculation
I think this is pretty interesting, and thanks for sharing your thoughts! There are things here I agree with, things I disagree with, and I might say more when I’m on my computer, not my phone! However, I’d love to call about this to talk more.
Is there a recording?
I’m always happy to offer my opinions.
Here’s my email: linda.linsefors@gmail.com
There is, it should be on the CEA YouTube channel at some point. It is also a forum post: https://forum.effectivealtruism.org/posts/cXH2sG3taM5hKbiva/beyond-simple-existential-risk-survival-in-a-complex#:~:text=It sees the future as,perhaps at least as important.