EA is still in the “group of friends just trying stuff” mindset even though it has far too much power at this point to be operating that way. And so we don’t always have the level of scrutiny or accountability that we should, because people who have been involved since 2014 are like, “Oh him? That’s just Bob. Everyone knows Bob.” But actually we don’t all know Bob anymore because there are like 10,000 of us now, not 100. The rest of Will’s post is about how EA can be better about this, and I thought it was pretty good.
Something about this jars with me and I don’t know what.
This is a very interesting comment and reaction.
I know what Kirsten means—it does feel like “friends doing stuff” compared to the way some other big movements are run. I didn’t read it as being jarring and I don’t think it was intended as a massive criticism.
BUT “friends doing stuff” is good. We need to be trying stuff. And friends who know and trust each other and have a network and knowledge and understanding and who talk to each other and come up with ideas of things to try and actually try them: that is great. That is what so many large R&D organisations dream of but can never achieve, because they get stuck in formal structures and rigid policies. The EA movement is still very young, we need this mentality.
The alternative would seem to be to make trying new things harder. I’m not convinced that would be helpful.
The middle ground is probably that at a certain point in scaling up ideas (e.g. based on spend) there could be more scrutiny.
HOWEVER, where I don’t necessarily agree with Kirsten (based on my very limited experience) is on the questions of scrutiny or accountability. Having spent my career outside the EA environment, I can honestly say I have never before seen a group of people who more actively seek scrutiny, put their ideas out there, and ask people to shoot at them.
I see organisations putting their research or action plans on here and saying “guys, this is what we plan to do—before we start, please tell us anything that you disagree with” and then engaging actively and constructively with all the feedback.
Maybe there are some formal accountability structures missing (because many organisations are like start-ups rather than big companies) - but I don’t think you want that to start too early. I can’t really comment on this, but I would imagine that most organisations would have some kind of review before investing a lot of money in scaling an idea—but might be happy to give someone $1000 and a few weeks to go and try something.
Is it the way I’ve written it that doesn’t sit right, or the dynamic itself?
I think your representation isn’t quite right, though I can’t figure out how it’s wrong.
Maybe:
It seems like these are talented people with a lot of experience
It is not surprising they have often been in the movement for a while—trust takes time to build—hence there are fewer candidates
I would like better elites, but it’s not clear to me how we get them; the process of choosing people who have power is just really difficult.
I think there is also something vaguely similar to the Matthew effect that goes on. I’m not particularly confident in this, and I’m not sure that I fully endorse it.
People who got involved X years ago have gotten a network, specialized skills, and knowledge that is unavailable to others (or at least is harder for others to get). They had the ‘first mover advantage.’ Over years of attending many conferences (including those that some people get automatic or nearly-automatic admission to as a result of their employer) and retreats, building a reputation, and having random “hallway conversations,” an initially small gap has become quite wide.
The simplest example in my mind is to imagine someone who had been involved in EA since 2014-2019 recommending that you upskill by taking a workshop from CFAR, or that you attend an EAG. But CFAR doesn’t offer workshops anymore, so that option for training/upskilling/networking is literally not available anymore, and you likely won’t get accepted to EAG if you don’t look impressive enough (according to particular criteria).
While I think this is sort of true, reading the linked article might give you the impression that the bar is much higher than it is?
I know many people who’ve recently been accepted to EA conferences with much less impressive or EA-relevant backgrounds. If it were just this I would say that it’s hard to make a perfect process and there will always be some false positives and false negatives.
But:
There’s something important missing from their description of their experience. They wrote, responding to Amy, the head of the CEA Events team, “from our conversation, I came to understand that there is a distinct reason that could be pointed to for my rejection from EAG”, but then they don’t disclose that reason and, citing privacy, neither will Amy.
Yeah, admissions is complicated. And writing “you likely won’t get accepted to EAG if you don’t look impressive enough” is a vast simplification. In reality I imagine that it is some nebulous combination of traditional impressiveness, EA-specific impressiveness, and potential future contribution (all from the perspective of the admissions team). But like many things in life, I’m guessing that the decisions often come down to judgement calls rather than a strict and clear decision tree.
In a vague parallel to university admissions, there isn’t a simple standard or algorithm (such as “a function of high school grades and standardized test scores”); instead, it is really a judgement call for each individual applicant. In another parallel to university admissions, sometimes the star trombone player is graduating and the school really needs a good trombone player. I imagine that, similarly, there are priorities for EA conferences that aren’t transparent/visible to the public: maybe the person doing X will be resigning soon, so there is a big push to nurture more talent doing X to find a replacement.