Idk, academia doesn’t care about the things we care about, and as a result it is hard to publish there. It seems like, long-term, we want to build a branch of academia that cares about what we care about, but until then it seems pretty bad to subject yourself to peer reviewers who argue that your work is useless because they don’t care about the future, and/or to rewrite your paper so that regular academics understand it whereas other EAs who actually care about it don’t. (I think this is the situation in AI safety.)
It seems like an overstatement that the topics of EA are completely disjoint from the topics of interest to various established academic disciplines. I do agree that many of the intellectual and methodological approaches are still very uncommon in academia.
It is not hard to imagine ideas from EA (and also the rationality community) becoming a well-recognized part of some branches of mainstream academia. And this would be extremely valuable, because it would unlock resources (both monetary and intellectual) that go far beyond anything that is currently available.
And because of this, it is unfortunate that so little effort goes into establishing EA thinking in academia, especially since it is not *that* hard:
- In addition to posting articles directly to a forum, consider treating that post as a pre-print and go the extra mile: also submit it as a research paper or commentary to a peer-reviewed open-access journal. This way, you gain additional readers from outside the core EA group, and you make it easier to cite your work as a reputable source.
  - Note that this also makes it easier to write grant proposals about EA-related topics. Writing a proposal right now, I have the feeling that 50% of my citations would be to blog posts, which feels like a disadvantage.
  - Also note that this increases the pool of EA-friendly reviewers for future papers and grant proposals. Reviewers are often picked from the pool of people who are cited by an article or grant under review, or who pop up in related literature searches. If most of the relevant literature is locked into blog posts, this system does not work.
- Organize scientific conferences.
- Form an academic society / association.
- Etc.
> It seems like an overstatement that the topics of EA are completely disjoint from the topics of interest to various established academic disciplines.
I didn’t mean to say this; there’s certainly overlap. My claim is that (at least in AI safety, and I would guess in other EA areas as well) the reasons we do the research we do are different from those of most academics. It’s certainly possible to repackage the research in a format more suited to academia, but it must be repackaged, which leads to:
> rewrite your paper so that regular academics understand it whereas other EAs who actually care about it don’t
I agree that the things you list have a lot of benefits, but they seem quite hard to do. I do still think publishing with peer review is worth it despite the difficulty.