Private equity investor (E2G)
Co-Treasurer @ EA UK
Trustee @ EA for Christians
Trustee @ ICM UK
Director @ EA Good Governance Project
MBA @ INSEAD
I appreciate this comment
Highly engaged EAs were much more likely to select research (25.0% vs 15.1%) and much less likely to select earning to give (5.7% vs 15.7%)
Are you sure this isn't just a function of the definition of "highly engaged"?
Yes! A rather important typo! I've now fixed it.
The Parable of the Good Samaritan seems to lean towards impartiality. Although the injured man was lying in front of the Samaritan (geographic proximity), the Samaritan was considered a foreigner / enemy (no proximity of relationship).
Did the EV US Board consider running an open recruitment process and inviting applications from people outside of their immediate circle? If so, why did it decide against?
Thanks, Ben. This is a really thoughtful post.
I wondered if you had any update on the blurring between EA and longtermism. I've seen a lot of criticism of EA that is really just low quality criticism of longtermism because the conclusions can be weird.
Sorry if I wasn't clear. My claim was not "Every organisation has a COO"; it was "If an organisation has a COO, the department they manage is typically front-office rather than back-office and often the largest department".
For Apple, they do indeed manage front-office operations: “Jeff Williams is Apple’s chief operating officer reporting to CEO Tim Cook. He oversees Apple’s entire worldwide operations, as well as customer service and support. He leads Apple’s renowned design team and the software and hardware engineering for Apple Watch. Jeff also drives the company’s health initiatives, pioneering new technologies and advancing medical research to empower people to better understand and manage their health and fitness.”
For Amazon, I couldn't find a COO of the entire company, though it looks like they exist for the business units.
I also found these charts a little confusing. A single value for each, or a clustered column chart, might be clearer.
Two quick points:
Yes, legal control is the first consideration, but governance requires skill, not just value alignment
I think in 2023 the skills you want largely exist within the community; it’s just that (a) people can’t find them easily (hence I founded the EA Good Governance Project) and (b) people need to be willing to appoint outside their clique
Thanks, Ben. I agree with what you are saying. However, I think that on a practical level, what you are arguing for is not what happens. EA boards tend to be filled with people who work full-time in EA roles, not with fully aligned, talented individuals from the private sector (e.g. lawyers, corporate managers) who might be earning to give, having followed 80k's advice 10 years ago.
Thanks for this! You might be right about the non-profit vs. for-profit distinction in ‘operations’ and your point about the COO being ‘Operating’ rather than ‘Operations’ is a good one.
Re avoiding managers doing paperwork, I agree with that way of putting it. However, I think EA needs to recognise that management is an entirely different skill. The best researcher at a research organization should definitely not have to handle lots of paperwork, but I'd argue they probably shouldn't be the manager in the first place! Management is a very different skillset involving people management, financial planning, etc., and those responsibilities are often pushed onto operations teams by people who shouldn't be managers.
Most organizations do not divide tasks between core and non-core. The ones that do (and these are probably most similar to a lot of EA orgs) are professional services firms.
Administration definitely sounds less appealing, but maybe it would be more honest and reduce churn?
I don’t work in ops or within an EA org, but my observation from the outside is that the way EA does ops is very weird. Note these are my impressions from the outside so may not be reflective of the truth:
The term "Operations" is not used in the same way outside EA. In EA, it normally seems to mean "everything back office that the CEO doesn't care about, as long as it's done". Outside of EA, it normally means the main function of the organisation (the COO normally has the highest number of people reporting to them after the CEO)
EA takes highly talented people and gives them menial roles because value-alignment is more important than experience and cost-effectiveness
People in EA have a lower tolerance for admin, possibly because they elevate themselves to a high level of importance. I've worked with very senior and very busy company executives in the normal world and they reply to my emails. Yet in EA, it feels like once you have 2 years of experience in EA, you are too important to read your own emails and need somebody with 1 year of experience to do it for you
EA has so many small organizations and there seems to be so much reinventing the wheel, yet when it comes to specialists there are none
Managers within EA don't seem to realise that some things they call operations are actually management responsibilities, and that to be a manager you need to be willing to do less of the day job, or maybe none of it, e.g. the CEO of a large research organisation should probably not do research anymore
There are some very competent leaders within EA so I don’t think we should make sweeping assumptions. I think we need to make EA a meritocracy
There are so many parallels to the Christian church
@Jack Lewars is spot on. If you don’t believe him, take a look at the list of ~70 individuals on the EA Good Governance Project’s trustee directory. In order to effectively govern you need competence and no collective blindspots, not just value alignment.
Thanks, Joey. Really appreciate you taking the time to engage on these questions.
To be clear, I'm not seriously suggesting ignoring all research from before the decision. I'm just saying that, mathematically, an independent test needs its backtest data to exclude all calibration data.
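To make the point concrete, here is a toy simulation in Python (all numbers are made up for illustration and have nothing to do with CE's actual data or methodology). If you pick the most promising of several candidate interventions using one batch of noisy estimates and then quote that same estimate as your measured impact, the number is biased upwards; an estimate from a fresh, post-decision sample is not.

```python
import random

random.seed(0)

TRUE_EFFECT = 1.0      # every candidate intervention has the same true impact
NOISE = 0.5            # measurement noise in any single study
N_CANDIDATES = 20      # hypothetical number of ideas screened
N_TRIALS = 10_000      # repeat the whole exercise many times to see the bias

def noisy_estimate():
    """One noisy study of an intervention whose true impact is TRUE_EFFECT."""
    return TRUE_EFFECT + random.gauss(0, NOISE)

selected_scores, holdout_scores = [], []
for _ in range(N_TRIALS):
    # "Calibration" data: the research used to decide which idea to pursue.
    calibration = [noisy_estimate() for _ in range(N_CANDIDATES)]
    # Independent test: new evidence gathered only after the decision was made.
    holdout = noisy_estimate()
    selected_scores.append(max(calibration))  # score of the chosen idea
    holdout_scores.append(holdout)

print(f"True effect:                           {TRUE_EFFECT:.2f}")
print(f"Avg calibration score of chosen idea:  {sum(selected_scores) / N_TRIALS:.2f}")
print(f"Avg post-decision (holdout) estimate:  {sum(holdout_scores) / N_TRIALS:.2f}")
# Because each idea is chosen *for* scoring well on the calibration data,
# re-using that data overstates impact (roughly 1.9 here); a fresh,
# post-decision sample is unbiased (about 1.0).
```

In this sketch every candidate is equally good, yet the calibration-based figure for the winner looks almost twice as large as the truth, which is why the independent test has to rest on data gathered after the selection was made.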
It strikes me that there are broadly 3 buckets of risk / potential failure:
Execution risk—this is significant and you can only find out by trying, but you only really know if you’re being successful with the left hand side of the theory of change
Logic risk—having an external organisation take a completely fresh view should solve most of this
Evidence risk—even with an external organisation marking your homework, they are still probably drawing on the same pool of research and that might suffer from survivorship bias
Thank you for writing this and for all the work you (and others) have put in over the years.
My question is to what extent you think CE's impact measurement is tautological. If you determine something to be a high-impact opportunity and then go and do it, aren't you by definition doing things you estimate to be high impact (as long as you don't screw up the execution or realise you made an error)? To fully adjust for selection effects, you would have to ignore all research conducted before the decision was made and rely solely on new data, which is probably quite hard to come by.
The 40% seems very high. For-profit start-ups have a much higher failure rate. If that’s true, that’s incredible, but I’d expect to see more like 5% of charities and 50% of funds.
We should all try to maximise our impact and there’s a good argument for specialisation.
However, I’m concerned by a few things:
It's not obvious to me that spending more money on yourself will make you better at your job
There’s a danger of arrogance clouding our judgment, e.g. I don’t think 99% of people in EA should be flying business class
Donating has value for many people due to the “skin in the game” effect