Proposal: alternative to traditional academic journals for EA-relevant research (multi-link post)

Caveats: This is mainly a linkpost; the linked posts make the arguments more carefully. Some content comes from my comments on other posts.

I realize I may need to make a stronger case for ‘why we should care about academic research and bringing academic feedback and credibility to EA research’. If that motivation doesn’t seem apparent, you might read the below as “to the extent that EA-aligned researchers are seeking this, here’s a proposal for how to do it better”.

Update Feb/​March 2022: LTFF funding received, progress being reported in the collaborative gitbook space—I will make an updated backlink post soon.

We can help ‘slay the journals’, make research better, and nudge academics towards considering EA-relevant issues

Lauren’s post on bringing EA ideas into large research organizations reminded me that EA organizations and researchers can “make research better, and in doing so, bring academics into our fold”. Our shared principles and values, and our lack of ties to traditional systems, can help us break the collective-action problems.

EA researchers and orgs need an alternative to traditional journals

In the Slaying the journals discussion I argue:

Global priorities and EA research organizations are looking for ‘feedback and quality control’, dissemination, and external credibility. We would gain substantial benefits from supporting, and working with [journal-independent peer-evaluation systems], rather than (only) submitting our work to traditional journals. We should also put some direct value on results of open science and open access, and the strong impact we may have in supporting this.

I am eager for us to take concrete steps towards an alternative to the ‘traditional academic journals’ process. As I argue in the ‘unjournal’ link, the traditional model

  • lets publishers extract rents and makes research less accessible,

  • inhibits innovation and open science practices (especially dynamic docs),

  • (most substantially) leads to tremendous wasted effort and risk, as it encourages researchers to focus on gamesmanship and often requires us to submit papers to a long sequence of journals with binary (accept/​reject) outcomes.

“Plan of action”, crucial steps

I set up a space where I propose a Plan of Action HERE in the Gitbook format. I would appreciate your feedback and suggestions.

I think the crucial steps are

  1. Set up an “experimental space” e.g., on PREreview allowing us to include additional, more quantitative metrics (they have offered this as a possibility), and to focus on content and approaches that are relevant to EA and global priorities.

  2. Most crucially: get funding, support, and commitments (from GPI, RP, etc.)

  • … for people to do reviewing, rating, and feedback activities in our space in PREreview

  • … for ‘editorial’ people to oversee which research projects are relevant and assign relevant reviewers

  3. Link arms with Cooper Smout and the “Free our Knowledge” pledges and initiatives like this one as much as possible. Note that this is very close to Cooper’s mission, and he has time funded/​allotted for this.

Asides and caveats

My ‘rated list of tools and partners’

In this Airtable view I give my rough opinion about the value of existing outlets including innovative OA journals, places to host preprints and research projects, and, most importantly IMO, journal-independent peer review and rating tools.

Do we need an actual OA journal?

I don’t think setting up an OA journal with an impact factor is necessary. I think “credible quantitative peer review” is enough, and in fact the best mode. But I am also supportive of open-access journals with good feedback/​rating models like SciPost. It might be nice to have an EA-relevant place like this. Cooper Smout is more enthusiastic about the idea of starting best-practice OA journals that ‘give every acceptable paper a rating’ … see our discussion after my post here.

I recognize that open-access/​open science in some fields can raise X-risks

We give a rough outline of the arguments here. But I think it’s pretty clear in most cases whether or not this is relevant.

To be a bit glib… Microbiology of diseases: Yes. General AI: Probably. Development Economics: No. Psychology: No.

This is a ‘small but big’ step

My proposal may not ‘fix the biggest problems of research alignment and productivity’ (see, e.g., discussions here and here), nor make a tremendous contribution to humanity.

But it would make research somewhat more efficient, transparent, and accessible. It would make researchers’ careers less stressful and less random. They would appreciate us for that.

And it would help EA-aligned researchers and organizations do better, more credible research.