On the abstract level, I think I see EA as less grand / ambitious than you do (in practice, if not in theory). The biggest focus of the longtermist community is reducing x-risk, which is good under basically any ethical theory that people subscribe to (the exceptions are negative utilitarianism and nihilism, but nihilism cares about nothing, very few people are negative utilitarians, and most of those seem to be EAs). So I see the longtermist section of EA more as the “interest group” within humanity that advocates for the future, as opposed to one that’s going to determine what will and won’t happen in the future. I agree that if we were going to determine the entire future of humanity, we would want to be way more diverse than we are now. But if we’re more like an interest group, efficiency seems good.
On the concrete level—you mention not being happy about these things:
EAs give high credence to non-expert investigations written by their peers
Agreed this happens and is bad
they rarely publish in peer-reviewed journals and become increasingly dismissive of academia
Idk, academia doesn’t care about the things we care about, and as a result it is hard to publish there. It seems like in the long term we want to build a branch of academia that cares about what we care about, but until then it seems pretty bad to subject yourself to peer reviewers who argue that your work is useless because they don’t care about the future, and/or to rewrite your paper so that regular academics understand it whereas other EAs who actually care about it don’t. (I think this is the situation of AI safety.)
show an increasingly certain and judgmental stance towards projects they deem ineffective
Agreed this happens and is bad (though you should get more certain as you get more evidence, so maybe I think it’s less bad than you do)
defer to EA leaders as epistemic superiors without verifying the leaders’ epistemic superiority
Agreed this happens and is bad
trust that secret google documents which are circulated between leaders contain the information that justifies EA’s priorities and talent allocation
Agreed this would be bad if it happened, I’m not actually sure that people trust this? I do hear comments like “maybe it was in one of those secret google docs” but I wouldn’t really say that those people trust that process.
let central institutions recommend where to donate and follow advice to donate to central EA organisations
Kinda bad, but I think this is more a fact about “regular” EAs not wanting to think about where to donate? (Or maybe they have more trust in central institutions than they “should”.)
let individuals move from a donating institution to a recipient institution and vice versa
Seems really hard to prevent this—my understanding is it happens in all fields, because expertise is rare and in high demand. I agree that it’s a bad thing, but it seems worse to ban it.
strategically channel EAs into the US government
I don’t see why this is bad. I think it might be bad if other interest groups didn’t do this, but they do. (Though I might just be totally wrong about that.)
adjust probability assessments of extreme events to include extreme predictions because they were predictions by other members
That seems somewhat bad but not obviously so? Like, it seems like you want to predict an average of people’s opinions weighted by expertise; since EA cares a lot more about x-risk it often is the case that EAs are the experts on extreme events.
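To make the “weighted by expertise” intuition concrete, here’s a minimal sketch (not from the original post; the function, estimates, and weights are all made up for illustration) of averaging individual probability estimates of an extreme event, giving more weight to forecasters who have spent more time on the question:

```python
# Hypothetical illustration: expertise-weighted average of probability
# estimates. All numbers are invented; this is not anyone's actual forecast.

def weighted_average(probabilities, weights):
    """Return the weighted mean of the given probability estimates."""
    total_weight = sum(weights)
    return sum(p * w for p, w in zip(probabilities, weights)) / total_weight

estimates = [0.01, 0.05, 0.20]  # individual probability estimates of the event
expertise = [1.0, 2.0, 4.0]     # relative expertise weights (assumed, not measured)

print(weighted_average(estimates, expertise))  # ≈ 0.13
```

The point is just that if the people who have thought most about x-risk also give the more extreme estimates, an expertise-weighted aggregate will naturally end up closer to those estimates.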
Idk, academia doesn’t care about the things we care about, and as a result it is hard to publish there. It seems like in the long term we want to build a branch of academia that cares about what we care about, but until then it seems pretty bad to subject yourself to peer reviewers who argue that your work is useless because they don’t care about the future, and/or to rewrite your paper so that regular academics understand it whereas other EAs who actually care about it don’t. (I think this is the situation of AI safety.)
It seems like an overstatement that the topics of EA are completely disjoint from topics of interest to various established academic disciplines. I do agree that many of the intellectual and methodological approaches are still very uncommon in academia.
It is not hard to imagine ideas from EA (and also the rationality community) becoming a well-recognized part of some branches of mainstream academia. And this would be extremely valuable, because it would unlock resources (both monetary and intellectual) that go far beyond anything that is currently available.
And because of this, it is unfortunate that there is so little effort to establish EA thinking in academia, especially since it is not *that* hard:
In addition to posting articles directly to a forum, consider that post a preprint and go the extra mile to also submit it as a research paper or commentary to a peer-reviewed open-access journal. This way, you gain additional readers from outside the core EA group, and you make it easier to cite your work as a reputable source.
Note that this also makes it easier to write grant proposals about EA-related topics. Writing a proposal right now, I have the feeling that 50% of my citations would be to blog posts, which feels like a disadvantage.
Also note that this increases the pool of EA-friendly reviewers for future papers and grant proposals. Reviewers are often picked from the pool of people who are cited by an article or grant under review, or who pop up in related literature searches. If most of the relevant literature is locked into blog posts, this system does not work.
Organize scientific conferences.
Form an academic society / association.
Etc.
It seems like an overstatement that the topics of EA are completely disjoint from topics of interest to various established academic disciplines.
I didn’t mean to say this; there’s certainly overlap. My claim is that (at least in AI safety, and I would guess in other EA areas as well) the reasons we do the research we do are different from those of most academics. It’s certainly possible to repackage the research in a format more suited to academia—but it must be repackaged, which leads to
rewrite your paper so that regular academics understand it whereas other EAs who actually care about it don’t
I agree that the things you list have a lot of benefits, but to me they seem quite hard to do. I do still think publishing with peer review is worth it despite the difficulty.
Agreed this would be bad if it happened, I’m not actually sure that people trust this? I do hear comments like “maybe it was in one of those secret google docs” but I wouldn’t really say that those people trust that process.
FWIW, I feel like I’ve heard a fair number of comments suggesting that people basically trust the process. Though maybe such comments became a bit less frequent over time. Most of this was about very large documents on AI safety and strategy issues allegedly existing within OpenAI and MIRI.
I’m glad when things do get published. E.g. Eric Drexler’s Reframing Superintelligence used to be a collection of Google docs.
But I find it hard to say to what extent non-published Google docs are suboptimal, i.e. worse than the alternatives. E.g. to some extent it does seem correct that I give a bit more weight to someone’s view on, say, AI timelines, if I hear that they’ve thought about it so much that they were able to write a 200-page document about it. Similarly, there can be good reasons not to publish documents—either because they contain information hazards (though I think that outside of bio many EAs are way too worried about this, and overestimate the effects that marginal publication by non-prominent researchers can have on the world) or because the author can use their time better than by making these docs publishable.
My best guess is that the status quo is significantly suboptimal, and could be improved. But that is based on fairly generic a priori considerations (e.g. “people tend to be more worried about their ‘reputation’ than warranted and so tend to be too reluctant to publish non-polished documents”) that I could easily be wrong about. In some sense, the biggest problem is that the whole process is so opaque that it is hard to ascertain from the outside how good it is.
It also means that trust in the everyday sense really plays an important role, so people outside EA circles who don’t have independent reasons to trust the people involved (e.g. because of social/personal ties or independent work relationships) won’t give as much epistemic weight to it, and they will largely be correct in doing so. I.e. perhaps the main cost is not to epistemic coordination within EA, but rather to EA’s ability to convince skeptical ‘outsiders’.
Most of this was about very large documents on AI safety and strategy issues allegedly existing within OpenAI and MIRI.
I agree people trust MIRI’s conclusions a bunch based on supposed good internal reasoning / the fact that they are smart, and I think this is bad. However, I think this is pretty limited to MIRI.
I haven’t seen anything similar with OpenAI, though of course it is possible. I agree with all the other things you write.
This is a good post, I’m glad you wrote it :)