There are lots of good points here. I could say more, but here are just a few comments:
The obsession over intelligence is counterproductive. I worry a lot that EA is becoming too insular and that money and opportunities are being given out based largely on a perception of how intelligent people are and the degree to which people signal in-group status. The result is organizations like MIRI and Leverage staffed by autists that have burned through lots of money and human resources while only producing papers of marginal value. The fact they don’t even bother to get most papers peer reviewed is really bad. Yes, peer review sucks and is a lot of work, but every paper I had peer reviewed was improved by the process. Additionally, peer review and being able to publish in a good journal is a useful (although noisy) signal to outsiders and funders that your work is at least novel and not highly derivative.
The focus on intelligence can be very off-putting, and I suspect it is a reason many people are not involved in EA. I know one person who said they are not involved because they find it too intimidating. While I have not experienced problems at EA events, I have experienced a few times at LessWrong events where people were either directly or indirectly questioning my intelligence, and I found it off-putting. In one case, someone said, “I’m trying to figure out how intelligent you are.” I remember times I had trouble keeping up with fast-paced EA conversations. There have been some conversations I’ve seen that appeared to be a bunch of people trying to impress and signal how intelligent they are rather than doing something constructive.
Age diversity is also an issue. Orgs with similar values, like humanist or skeptic orgs, have much greater age diversity. I think this is related to the focus on intelligence, especially on superficial markers like verbosity and fluency/fast-talking, and to the dismissal of skeptics and critics (people who are older tend to have developed a more critical/skeptical take on things due to greater life experience).
Old comment, so maybe this isn’t worth it, but: as someone diagnosed with Asperger’s as a kid, I’d really prefer if people didn’t attribute things you don’t like about people to their being autistic, in a causal manner and without providing supporting evidence. I don’t mean you can never be justified in saying that a group having a high prevalence of autism explains some negative feature of their behavior as a group. But I think care should be taken here, as when dealing with any minority.
I agree peer review is good and people should not dismiss it, and that too much speculation about how smart people are can be toxic. (I probably don’t avoid it as much as I should.) But that’s kind of part of my point: not all autists fit some negative stereotype of cringe Silicon Valley people, even if, like most stereotypes, there is a grain of truth in it.
Late to reply, but those are fair points; thanks for pointing that out. I do need to be more careful about attribution and stereotyping. The phenomenon I was trying to point at is that in the push to find “the most intelligent people”, these orgs end up selecting for autistic people, who in turn select more autistic people. There’s also a self-selection effect: neurotypicals don’t find working with a team of autistic people very attractive, while autistic people do. Hence the lack of diversity.
Thanks for responding.
Welcome to the forum! Apologies that the rest of my comment may seem overly critical/nitpicky.
While I agree with some other parts of your complaint, the implicit assumption behind
The fact they don’t even bother to get most papers peer reviewed is really bad. Yes, peer review sucks and is a lot of work, but every paper I had peer reviewed was improved by the process.
seems unlikely to be correct to me, at least on a naive interpretation. The implication here is that EA research orgs would be better if they aimed for academic publishing incentives. I think this is wrong because academic publishing incentives frequently make you prioritize bad things*. The problem isn’t an abstract issue of value-neutral “quality” but what you are allowed to think about and consider important.
As an example,
Additionally, peer review and being able to publish in a good journal is a useful (although noisy) signal to outsiders and funders that your work is at least novel and not highly derivative.
is indicative of one way in which publishing incentives may warp someone’s understanding, specifically constraining research quality to be primarily defined by “novelty” as understood by an academic field (as opposed to, e.g., truth, or decision-relevance, or novelty defined in a more reasonable way).
Holden Karnofsky’s interview might be relevant here, specifically the section on academia and the example of David Roodman’s research on criminal justice reform.
Holden Karnofsky: […] recently when we were doing our Criminal Justice Reform work and we wanted to check ourselves. We wanted to check this basic assumption that it would be good to have less incarceration in the US.
David Roodman, who is basically the person that I consider the gold standard of a critical evidence reviewer, someone who can really dig on a complicated literature and come up with the answers, he did what, I think, was a really wonderful and really fascinating paper, which is up on our website, where he looked for all the studies on the relationship between incarceration and crime, and what happens if you cut incarceration, do you expect crime to rise, to fall, to stay the same? He picked them apart. What happened is he found a lot of the best, most prestigious studies and about half of them, he found fatal flaws in when he just tried to replicate them or redo their conclusions.
When he put it all together, he ended up with a different conclusion from what you would get if you just read the abstracts. It was a completely novel piece of work that reviewed this whole evidence base at a level of thoroughness that had never been done before, came out with a conclusion that was different from what you naively would have thought, which concluded his best estimate is that, at current margins, we could cut incarceration and there would be no expected impact on crime. He did all that. Then, he started submitting it to journals. It’s gotten rejected from a large number of journals by now. I mean starting with the most prestigious ones and then going to the less.
Robert Wiblin: Why is that?
Holden Karnofsky: Because his paper, it’s really, I think, it’s incredibly well done. It’s incredibly important, but there’s nothing in some sense, in some kind of academic taste sense, there’s nothing new in there. He took a bunch of studies. He redid them. He found that they broke. He found new issues with them, and he found new conclusions. From a policy maker or philanthropist perspective, all very interesting stuff, but did we really find a new method for asserting causality? Did we really find a new insight about how the mind of a …
Robert Wiblin: Criminal.
Holden Karnofsky: A perpetrator works. No. We didn’t advance the frontiers of knowledge. We pulled together a bunch of knowledge that we already had, and we synthesized it. I think that’s a common theme is that, I think, our academic institutions were set up a while ago. They were set up at a time when it seemed like the most valuable thing to do was just to search for the next big insight.
These days, they’ve been around for a while. We’ve got a lot of insights. We’ve got a lot of insights sitting around. We’ve got a lot of studies. I think a lot of the times what we need to do is take the information that’s already available, take the studies that already exist, and synthesize them critically and say, “What does this mean for what we should do? Where we should give money, what policy should be.”
I don’t think there’s any home in academia to do that. I think that creates a lot of the gaps. This also applies to AI timelines where it’s like there’s nothing particularly innovative, groundbreaking, knowledge frontier advancing, creative, clever about just … It’s a question that matters. When can we expect transformative AI and with what probability? It matters, but it’s not a work of frontier advancing intellectual creativity to try to answer it.
*Also, society already has very large institutions working on academic publishing incentives, called “universities,” so from a strategic diversification perspective we may not want to replicate them exactly.