When Benjamin_Todd encouraged new projects by mentioning $100M+ orgs and CSET, my take was that he wanted to raise awareness of an important class of orgs that can now be built.
In this spirit, I think there might be some perspectives not yet mentioned in the ensuing discussions:
1. Projects requiring $100M+ of capital/talent have different patterns of founding and success
There may be reasons why building such $100M+ projects is different both from the many smaller “hits-based” projects that Open Phil funds (since at this scale a high chance of failure is unacceptable) and from GiveWell-style interventions.
One reason is that orgs like OpenAI and CSET require such scale just to get started, e.g. to interest the people involved:
Here are examples of members of the founding teams of CSET and OpenAI:
CSET: Jason Matheny, https://en.wikipedia.org/wiki/Jason_Gaverick_Matheny
OpenAI: Sam Altman, https://en.wikipedia.org/wiki/Sam_Altman
If you look at these profiles, I think you can infer that an org capped at $10M, or one that has to internalize a GiveWell-style cost-effectiveness aesthetic, wouldn’t work and nothing would be founded. The people wouldn’t be interested (as another data point, see the $1M salaries at OpenAI).
2. Skillset and training patterns might differ from previous patterns in the EA movement
I think it’s important to add nuance to an 80,000 Hours-style article of “get $100M+ org skills”:
Being an executive at a charity that delivers a tangible product requires different skills to running a research or advocacy charity. A smaller charity will likely need to recruit all-rounders who are pretty good at strategy, finance, communications and more. In contrast, in a $100 million org you will also need people with specialized skills and experience in areas like negotiating contracts or managing supply chains.
Note that being good at the most senior levels usually involves mastering, or being fluent in, many smaller, lower-status skills.
As evidence: when you work alongside senior leaders, you often see them flaunting or actively using these skills even when they apparently don’t have to.
This is because gears-level knowledge improves judgement on all decisions (e.g. “kicking the tires”/“tasting the soup”).
Also, the most important skill of senior leaders is selecting and fostering staff and other leaders, and again, gears-level observation of these smaller skills is essential to that judgement.
specialized skills and experience in areas like negotiating contracts or managing supply chains.
Note that in a $100M+ org, these specialized skills can be fungible in a way that “communication” or “strategy” is not.
If you want to start or join an EA charity that can scale to $100 million per year, you should consider developing skills in managing large-scale projects in industry, government or another large charity in addition to building relationships and experience within the EA community.
Given the primary motivation of impact, and under the premise of Benjamin_Todd’s statement, I think we would expect the goal to be creating these big projects within 3 to 5 years.
Some of these skills, especially those needed to found a $100M+ org, would be extremely difficult to acquire within that time.
There are other reasons to be cautious:
Note that approximately every ambitious person wants these skills and this profile, and this set of people is immensely larger than the set of people with the more specialized skill sets (ML, science, economics, policy) that have been encouraged in the past.
The skills are hard to observe (outputs like papers or talks are far less substantive here, and blogging/internet discussion is often looked down on).
The skillsets and characters involved can be orthogonal, or even opposed, to EA traits such as conscientiousness or truth-seeking.
Related to the above, free-riding and other behavior that pools with altruism is often used to mask very conventional ambition (see Theranos, and on some views, approximately every SV startup).
I guess my point is that I don’t want to see EAs get Rickon’d by running in a straight line as a consequence of these discussions.
Note that underlying all of this is a worldview that treats founder effects/relationships/leadership as critical and founders as not fungible.
It’s important to notice this explicitly, as this worldview may be very valid for some interventions but not for others.
It is easy for these worldviews to spill over harmfully, especially if packaged with the high status we might expect to be associated with new EA megaprojects.
3. Pools of EA leaders already exist
I also think there exists a large pool of EA-aligned people (across all cause areas/worldviews) who have the judgement to lead such orgs but may not feel fully comfortable creating them and carrying them from scratch.
Expanding on this: conditional on seeing an org with one of these people in the top role, I would trust the org and its alignment. However, they may not want to work at the required intensity or deal with the operational and political issues (e.g. putting down activist revolts, handling noxious patterns such as “let fires burn”, and winning the two games of funding and impact).
This might leave open important opportunities related to training and other areas of support.
There may be reasons why building such $100M+ projects is different both from the many smaller “hits-based” projects that Open Phil funds (since at this scale a high chance of failure is unacceptable) and from GiveWell-style interventions.
One reason is that orgs like OpenAI and CSET require such scale just to get started, e.g. to interest the people involved
This sounds like CSET is a $100M+ project. Their Open Phil grant was for $11M/year for 5 years, and Wikipedia says they got a couple of millions from other sources, so my guess is they’re currently spending something like $10M-$20M/year.
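For concreteness, here is a minimal back-of-envelope sketch of that estimate in Python. The grant figure is taken from the comment above; treating the “couple of millions” from other sources as $2M spread evenly over the grant period is an assumption for illustration.

```python
# Back-of-envelope sketch of the CSET budget estimate above.
open_phil_per_year = 55_000_000 / 5   # Open Phil grant: $11M/year over 5 years
other_total = 2_000_000               # assumed: "a couple of millions" from other sources, total
other_per_year = other_total / 5      # assumed: spread evenly over the same 5 years

annual_budget = open_phil_per_year + other_per_year
print(f"Estimated annual budget: ${annual_budget / 1e6:.1f}M")
# -> Estimated annual budget: $11.4M
# This falls within the $10M-$20M/year guess, and is an order of
# magnitude below the $100M+ scale discussed in the post.
```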
I strong upvoted this because I think it’s really important to consider in what situations you should NOT try to develop these kinds of skills!
Yes, I wouldn’t say CSET is a megaproject, though more CSET-like things would also be amazing.
Thank you for pointing this out.
You are right, and I think a reasonable guess may even be that CSET’s funding is starting out at less than $10M a year.