For many years I’ve been trying to figure out a core disagreement I have with much of the underlying EA/rationalist school of thought. I think I’ve sort of figured it out: the emphasis on individual action, behavior, and achievement over the collective, and an unwillingness to engage with how the collective changes individuals through emergent properties (e.g. norms, power dynamics).
This has improved a bunch since I first joined in 2017 (the biggest shock to the system was FTX and subsequent scandals). Why I think these issues have existed historically:
General skepticism towards management / bureaucracy
Unwillingness to engage with “fuzzier” social sciences (and unfamiliarity with the same)
A lot of focus on the power-law distribution of individual talent and performance (and very little on organisational talent & performance)
Too much first-principles wheel re-inventing and rejection of “common sense” best practices
I care a lot about this issue because I think very few problems can be solved by a one-man show. I believe an important way the world improves is by building organisations.
It’s not enough to have a superstar technical founder if they can’t find a way to reliably and sustainably produce output at scale, which is almost always by finding other people to help them (building a team). I think the key skill to succeed here is organisational competence: “the collective capabilities, skills, and knowledge that an organisation possesses, enabling it to perform effectively and achieve its strategic objectives.” (a random reasonable-sounding business website; emphasis mine)
The EA movement skews young and towards IC (individual contributor) academics, researchers, and engineers rather than operators with experience in leadership, management & operations. In fact, the skillsets needed to be a good IC can even be orthogonal to those of a good director or manager.
I notice the following:
Lack of strong non-technical founder-level operator talent that wants to work on EA priority areas
Current founders/leaders:
Failing to recognize the invaluable strategic role that senior operators play (as outlined in this great post).
Hiring senior operators, but then not being able to share control of the org with them, or to adapt and make compromises to enable the org to succeed
Being open to change, but not having the time or natural tendency towards big-picture thinking (I’ve noticed a common tendency towards perfectionism, which can slow people down)
What does strong non-technical founder-level operator talent actually mean concretely? I feel like I see lots of strong people struggle to find any role in the space.
Very curious if you can describe the types of people you know: their profiles, what cause areas and roles they have applied for, and what constraints they have, if any.
But typically (not MECE, i.e. not mutually exclusive or collectively exhaustive; written quickly; not in order of importance; some combination could work; etc.):
“Relentlessly resourceful” from Paul Graham covers a bunch of it better than I could
Strong intuitions for people management and org building (likely from experience)
Strong manager / leader
Gets shit done: moves quickly, is decisive, keeps the momentum going, has strong prioritization skills
Cares about structure and can implement systems, but only when they actually matter for helping the org achieve its goals
Able to switch between object-level, in-the-weeds work and strategic thinking, and willing to do in-the-weeds work if needed (for earlier-stage orgs)
Would you say that Teryn Mattox or Dan Brown at GiveWell or Alexander Berger or Emily Oehlsen at Open Philanthropy meet this description?
Ok, thanks for the details. Off the top of my head I can think of multiple people interested in AI safety who probably fit these descriptions (though I think they could still be more concretely operationalized). They fall into categories such as: founders/cofounders; several years’ experience in operations, analytics, and management; several years’ experience in consulting; multiple years’ experience in events and community building/management. Some want to stay in Europe, some have families, but overall I don’t recall them being super constrained.
80K, to their credit, have been trying to push back on single-player thinking since at least 2016, but it doesn’t seem to have percolated more widely.
Yeah, and I can probably cite 3-4 other prominent-ish articles. I think these efforts feel more like a band-aid than like actually changing the fundamental core principles on which you do your thinking.
General skepticism towards management / bureaucracy

and nearly 0 support for a political system within the movement. Decentralized = Money & Status → Power.
Unwillingness to engage with “fuzzier” social sciences (and unfamiliarity with the same)

This movement combines people who are probably engaging in measurability bias (global dev, animal welfare) and people who don’t care much about measurability (longtermists) for the sake of maximizing EV. Both get to be on the Pareto frontier of the EV/variance plane. Things that fall nearer the middle of this plane (“medium termism”: culture, politics) get disapproval from both sides.
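To make the Pareto-frontier claim concrete, here’s a toy sketch with entirely made-up EV and variance numbers (purely illustrative, not actual estimates of any cause area):

```python
# Toy illustration with hypothetical numbers: interventions as (expected value,
# variance) points. An option is Pareto-dominated if some other option has at
# least as much EV and no more variance, and is strictly better on one axis.
options = {
    "global_dev":     {"ev": 3.0,  "var": 1.0},    # measurable, low variance
    "animal_welfare": {"ev": 8.0,  "var": 5.0},
    "longtermism":    {"ev": 50.0, "var": 100.0},  # huge EV, huge variance
    "medium_termism": {"ev": 6.0,  "var": 40.0},   # culture/politics: middling EV, high variance
}

def dominates(a, b):
    """True if option a Pareto-dominates option b (EV at least as high,
    variance no higher, and strictly better on at least one axis)."""
    return (a["ev"] >= b["ev"] and a["var"] <= b["var"]
            and (a["ev"] > b["ev"] or a["var"] < b["var"]))

frontier = [name for name, o in options.items()
            if not any(dominates(p, o) for q, p in options.items() if q != name)]
print(frontier)  # ['global_dev', 'animal_welfare', 'longtermism']
```

With numbers like these, the “middle” option gets dominated even though its EV isn’t bad, which is the dynamic I’m pointing at.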
Too much first-principles wheel re-inventing and rejection of “common sense” best practices

I’m pretty ok with this. I think a lot of breakthroughs require narcissistic delusion to push on where a reasonable person might assume the fruit has already been plucked. Maybe on the margins you are right, though.
So keen to hear from the disagree-voters (currently about 21%, i.e. 5/23 votes) on which parts folks disagree with!
Whether or not this is the right decision is highly circumstantial.
Honestly, I’d typically prefer an organisation fail rather than compromise its mission.
The ‘enable the org to succeed’ implies ‘at its stated goals or mission’.
Like, by the org’s or its leaders’ own lights.
Strongly upvoted.
Compare with this quote from MacAskill’s What We Owe the Future, chapter 7, showing exactly the problem you describe:

“If scientists with Einstein-level research abilities were cloned and trained from an early age, or if human beings were genetically engineered to have greater research abilities, this could compensate for having fewer people overall and thereby sustain technological progress.”
That quote seems taken out of context. I don’t know the passage (stagnation chapter?), but I don’t think Will was making that point in relation to what kind of skillset the EA community needs.
Nice. I encountered a similar crux the other week in a career advice chat when someone said “successful people find the skills with which they really excel and exploit that repeatedly to get compounding returns” to which I responded with “well, people aren’t the only things that can have compounding returns, organizations can also have compounding returns, so maybe I should keep helping organizations succeed to capture their compounding returns.”
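As a toy illustration of why that reframing mattered to me, with made-up growth rates (nothing here comes from the actual conversation):

```python
# Toy sketch of compounding returns with hypothetical rates: anything growing
# multiplicatively is worth initial * (1 + rate) ** periods after that many periods.
def compound(initial, rate, periods):
    """Value after `periods` periods of growth at `rate` per period."""
    return initial * (1 + rate) ** periods

# An individual compounding a skill at a hypothetical 10%/year, vs. an org
# compounding its capabilities at a hypothetical 25%/year.
print(round(compound(1.0, 0.10, 10), 2))  # 2.59 (about 2.6x after 10 years)
print(round(compound(1.0, 0.25, 10), 2))  # 9.31 (about 9.3x after 10 years)
```

The compounding logic doesn’t care whether the thing compounding is a person or an organisation; only the rate and the horizon matter.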
On the flip side, the fact that EA has focused so much on community building and talent seems like a certain kind of communitarianism, putting the success of the whole above any individual.
Have not thought about compounding returns to orgs! I can think of some concrete examples with AIM ecosystem charities (e.g. one org helping bring another into creation, or creating a need for others to exist). Food for thought.
Curious how you see the communitarianism playing out in practice?
There’s definitely a cooperative side to things that makes it a lot easier to ask for help amongst EAs than the relevant professional groups someone might be a part of, but not sure I’m seeing obvious implications.
I’m only saying it’s in tension with the diagnosis of “emphasis on individual action, behavior & achievement over collective.”
I agree with all of your concrete discussion and think it’s important.