Executive Director at One for the World; chair of trustees at High Impact Athletes.
Jack Lewars
I understand that you are using this as an example of something you think is untrue and to demonstrate the asymmetrical burden of refuting a lot of claims.
However, if you’re prioritising, I would be most interested in whether it is true that you a) encouraged someone who you had financial and professional power over to drive without a driving licence; and b) encouraged someone in the same situation to smuggle drugs across international borders for you.
Whether or not they are formally an employee, encouraging people you have financial and professional power over to commit crimes unconnected to your mission is deeply unethical (and encouraging them to do this for crimes connected to your mission is also, at best, extremely ethically fraught).
Thanks for writing this Amber. I pretty firmly disagree, but I’m upvoting it anyway because I think we need to discuss these issues in the open, and you’ve put across your point of view in a measured, reasonable way. I hope to draft a response soon with some alternative suggestions.
(Also, as usual, @Peter Wildeford has made most of my points in his comment.)
I think my main disagreement is that this is taking on a straw man argument. I’m not aware of anyone suggesting that we should “prevent people from forming relationships with whom they want”, at least not in the strong sense of “banning” relationships. Indeed, I’ve spoken recently to a couple of people with experience of managing these issues, and they all say “people are free to date whomever they want” (provided the usual caveats about it being fully consensual etc.).
What is being suggested is that there should be clear policies for how these relationships will be handled, to ensure a) the safety of those involved, especially the partner(s) with less power; and b) the safety of the community as whole. Those then give all parties the chance to make informed decisions about whom they form relationships with, and to give informed consent.
Accordingly, I think all EA orgs should adopt a clear “relationships at work” policy (and I hope to release a template for this soon). This wouldn’t say “you can’t date anyone from work”. But it would say:
1. If you have a nonprofessional relationship with someone at/directly related to work, we will actively manage any conflicts of interest, by doing X, Y and Z
2. If you have a nonprofessional relationship with someone at/directly related to work, you have to tell the relevant organisation(s) promptly so they can achieve (1)
3. You can’t use work time to advance your romantic interests, in particular by e.g. propositioning people
4. If you date someone at/related to work and there is a power imbalance, you have a particular duty to think very carefully about this, per @Peter Wildeford ‘s comment
5. Even when you aren’t on work time, we reserve the right for your actions to have professional consequences if they are plausibly harmful (e.g. if someone comes into work on a Monday and says you sexually harassed them on a Friday night, that’s not ‘off limits’ for professional consequences)
Part 1 would involve things like recusing anyone who has a nonprofessional relationship with someone else from any decision about their pay, promotions, disciplinary processes etc., to avoid the obvious conflict of interest here (I commented elsewhere that I am concerned by you saying you would only “probably” find someone a new manager if their current manager starts dating them, although I don’t want to focus in too hard on a single word).
It also involves removing them from decisions about funding, hiring etc. for indirect work relationships.
Part 5 might seem the most draconian, but it is based on real examples of people who repeatedly sexually harassed colleagues and, because it happened 'after hours', it was never acknowledged or dealt with.

If orgs adopted a policy like this from the get-go, it would give everyone involved the chance to give informed consent—that is, they can understand exactly the consequences of their actions in advance and decide what works best for them. This isn't "preventing people from forming relationships"—it's allowing them to form relationships in an informed way.
You are right, of course, that this might lead to people not starting a relationship that they otherwise would have, but that's just tough, I'm afraid—as other commenters have pointed out, this happens all the time in professional settings. Alternatively, they might choose to start the relationship anyway, and then they can choose whether to live with how that is managed, or one or other person can seek a job elsewhere, in a different team etc.

Anyway, in sum, I think this is the steelman position, and I think its harms (maybe people have fewer relationships/less sex with people they work with) are greatly outweighed by its benefits (a healthier and safer workplace and community, which conforms to wider workplace norms).
Thanks for writing this. I share some of this uneasiness—I think there are reputational risks to EA here, for example by sponsoring people to work in the Bahamas. I’m not saying there isn’t a potential justification for this but the optics of it are really pretty bad.
This also extends to some lazy 'taking money from internet billionaires' tropes. I'm not sure how much we should consider bad faith criticisms like this if we believe we're doing the right thing, but it's an easy hit piece (and has already been done, e.g. a video attacked someone from the EA community running for Congress for being part-funded by Sam Bankman-Fried—I'm deliberately not linking to it here because it's garbage).
Finally, I worry about wage inflation in EA. EA already mostly pays at the generous end of nonprofit salaries, and some of the massive EA orgs pay private-sector level wages (reasonably, in my view—if you’re managing $600m/year at GiveWell, it’s not unreasonable to be well-paid for that). I’ve spent most of my career arguing that people shouldn’t have to sacrifice a comfortable life if they want to do altruistic work—but it concerns me that entry level positions in EA are now being advertised at what would be CEO-level salaries at other nonprofits. There is a good chance, I think, that EA ends up paying professional staff significantly more to do exactly the same work to exactly the same standard as before, which is a substantive problem; and there is again a real reputational risk here.
Hi Claire—thanks for the extra info here, which is very helpful.
Can you say whether you/Open Phil considered anything here to be a conflict of interest and if so how you managed that?
At a first glance, a trustee of EVF recommending a grant of £10m+ to EVF on behalf of their employer seems like a CoI.
Hi team—this sounds exciting!
I have some questions about how you will align with other university-focussed organisations (with a healthy dose of self-interest, obviously!).
If you are plausibly providing $17m-$54m in annual budget for university organising at just 17 schools, you’re likely going to dwarf every other organisation on campus (and definitely every other EA org). How will you avoid completely eclipsing other organisations with a different focus/approach/merits? A particular concern here would be if this capital overwhelmingly favoured longtermism, for example, which already benefits from an enormous imbalance in allocated EA funding.
I’m also interested in where the funding for this is coming from—this represents a dramatic increase on CEA’s budget and spend on university organising, which you seem confident that you can source.
I’m especially interested in this because this is a huge amount of money on a new initiative, I think without a track record or tested methodology. So in essence it’s a new idea, seeking to develop and test new approaches, but one being supported with a huge influx of capital (relative to other EA resources directed at these campuses). There are obviously risks here, the most obvious being that the programme might be ineffective/much less effective than alternatives, or make mistakes that have quite wide-ranging consequences, but become the dominant player anyway because of a huge asymmetry of resources.
All of this being said, I want to be clear that I’m actually super excited about this ambition and focus and One for the World will of course support as much as we can :-)
Hi Shakeel,
Thanks for this. I agree with your post and upvoted it.
However, I do also wonder if they are following what seems to be a common theme in EA crisis comms recently, which is to say as little as possible (presumably on the basis of legal advice). You wrote about this here: https://forum.effectivealtruism.org/posts/Et7oPMu6czhEd8ExW/why-you-re-not-hearing-as-much-from-ea-orgs-as-you-d-like
I agree with you that just about any comment or explanation from FLI would seem to help, and that passing the email exchange with Max over to Dentons seems to make the situation worse (slower responses, less full responses, bad optics etc.).
As an outsider (i.e. with no access to the legal advice or internal discussions at any of these orgs), I wonder how the legal risk is being weighed against the reputational risk in EA crisis comms at the moment. It seems like there is almost no communication coming out from EA orgs and leaders, which presumably is very legally safe but can have very damaging reputational consequences.
I expect you’re constrained in what you can say in response to this but, candidly, I think it’s important to note that CEA itself is choosing public silence, albeit about a different issue. Accordingly, I was surprised to see you posting (in a personal capacity) about another org needing to give a full explanation and you not understanding why they haven’t, and especially in such strong terms. CEA’s example probably influences a lot of other orgs within EA.
Is there something I am missing on this? Maybe that the FTX situation is sui generis?
Thanks Ben. A few things:
- I read the posts I could find on this topic on the forum, none of which mention a PR agency or hiring a PR team for the movement
- I've talked about this with a lot of other EA professionals and no one has mentioned that this idea is coming, although all of them thought it was a good idea to some degree
- I posted it as an idea for FTX and it wasn't taken up, without feedback or any suggestion that it was already happening
- I visited the many examples of mainstream press criticism of EA cited in other posts and saw no response or comment 'from EA'
But, most importantly, the things you list here don’t address the suggestion of this post. Individual orgs being advised by PR professionals, or Longview aiming for more press coverage, has at best partial overlap with the effect of a dedicated PR team for EA.
This is also self-evidently not having the effect of countering op-eds like this one (although that might be low priority work).
So, when OFTW, at least one GiveWell charity and presumably others are forwarded this op-ed by potential or actual major donors, and the response 'from EA' is crickets, I think it's pretty reasonable to say 'this seems like a good idea—can we make it happen?'
One for the World has had the Double The Donation widget for a couple of years. Unfortunately, it is about to become considerably more expensive, as they are upgrading everyone to a service called Match365. The plus side of this is that it will search your database for people who could be getting matching and email them proactively, and try to smooth the process of them getting matching. The downside is that it’s fairly expensive (~$4k/year), but I think it’d still be positive ROI, at least in the first year (probably with diminishing returns after that).
One thing to point out is that DTD massively overclaims its success rate. For example, it emails me every month saying “52 people claimed matching at their companies via the widget on your website”, but it turns out this means 52 people searched for matching, and I would estimate 1-2 per month actually end up accessing it (based on our overall volume of incoming corporate matching).
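Given DTD's overclaiming, it's worth checking the break-even arithmetic on the realistic numbers rather than the headline ones. A minimal sketch, using the ~$4k/year cost and the 1-2 matches/month estimate from above; the average match size is a hypothetical assumption:

```python
# Rough break-even sketch for a donation-matching widget.
# Known from the comment: ~$4,000/year cost, ~1-2 successful matches/month.
# The average match size is a hypothetical assumption for illustration.

ANNUAL_COST = 4_000          # approximate Match365 price, USD/year
matches_per_month = 1.5      # midpoint of the 1-2/month estimate
avg_match_usd = 500          # hypothetical average corporate match

annual_matched = matches_per_month * 12 * avg_match_usd
roi = annual_matched / ANNUAL_COST

print(f"Matched per year: ${annual_matched:,.0f}")   # $9,000
print(f"ROI multiple: {roi:.2f}x")                   # 2.25x
```

On these assumptions the widget still pays for itself, but the margin is far thinner than the "52 people claimed matching" emails would suggest.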
One final note - @Neil Warren as you are at Google, is there a reason you aren’t using Benevity to do your donations, which automatically adds matching? If this is new to you, see if you can log in at google.benevity.org
True, but GiveWell doesn’t expect funding to grow at the same rate as top quality funding opportunities, so that $1bn/year is going to need further donors. Unless we believe GiveWell’s top programmes/charities will never have a funding shortfall again, the point about where EA prioritises its funding still seems relevant.
Donating to AMF still seems like a good benchmark for cost effectiveness. Unlike George, my instinct is that e.g. a team retreat for an EA Group is likely to produce considerably less impact than spending the money on bednets or other GiveWell top charities.
Isn’t part of this considering whether Will’s comparative advantage is as a Board member? It seems very unlikely to me that it is, versus being a world class philosopher and communicator.
So I agree with your general point that leaders who make mistakes might not need to resign, but in the specific case I can’t see how Will is most impactful by being a Board member at really any org, as opposed to e.g. a philosophical or grant-making advisor.
This gave me pause for thought, so thank you for writing it. I also respect that you likely won’t engage with this response to protect your own wellbeing.
I just want to raise, however, that I think you have almost completely failed to address a) the power dynamics involved; and b) the apparently uncontroversial claim that people were asked to break laws by people who had professional and financial power over them.
It seems impossible to square the latter with being “honest, honourable and conscientious”.
This format is amazing. More please.
I’m very surprised that you think a three-person Board is less brittle than a bigger Board with varying levels of value alignment. How do three-person Boards deal with all the things you list that can affect Board make-up? They can’t, because the Board becomes instantly non-quorate.
This post is exceptionally useful, especially for people who don’t know much about crypto (like me).
I guess any of the following might be examples (emphasis on might):
- it seems bad to buy expensive historic buildings, which don’t seem fit-for-purpose for the proposed use case and have really high running costs—but the people involved are really smart, so...
- it seems bad to fly people to the Bahamas to do coworking and collaboration, and like this is being driven by a billionaire’s desire for company and personal convenience. It seems like this wouldn’t be the method you would choose if you were starting from a point of maximising impact and cost-effectiveness—but the people seem really smart
- it seems bad that the largest recipients of funding from the FTX Future Fund are organisations where one of the FTX grantmakers sits on their Board, but...
- it seems very very very bad to say you would take the bet every time, if someone told you that there was a 51% chance that you’d double the universe and a 49% chance that you’d destroy it, but...
I’m not sure if people did defer to these arguments because of the people making them rather than a sincere belief that they are good, but it seems at least possible (especially the last one).
Great post, thanks.
I might elaborate on your last category to include a) well-intentioned high competence people accidentally creating bad systems; and b) well-intentioned high competence people put into bad systems by leadership (so not just leaders but e.g. a community health team trying to deal with sexual harassment by one of their own Board members).
I think your section header covers this, but the body focuses specifically on CEOs and Boards. Lots of people in EA, not just leadership, can end up making mistakes because the systems/policies they work within aren’t fit for purpose.
That’s right, and this was very casually phrased, so thanks for pulling me up on it. A better way of saying this would be: “if you’re going to distribute billions of dollars in funding, in a way that is unusually capable of being harmful, but don’t have the time to explain the reasoning behind that distribution, it’s reasonable to ask you to hire people to do this for you (and hiring is almost certainly necessary for lots of other practical reasons).”
Thanks Jessica, this is helpful, and I really appreciate the speed at which you replied.
A couple of things that might be quick to answer and also helpful:
- Is there an expected value of someone working in an EA career that CEA uses? The rationale above suggests something like ‘we want to spend as much as top tier employers’ but presumably this relates to an expected value of attracting top talent that would otherwise work at those firms?
- I agree that it’s not feasible to produce, let alone publish, a BOTEC on every payout. However, is there a bar that you’re aiming to exceed for the manager of a group to agree to a spending request? Or a threshold where you’d want more consideration about granting funding? I’m sure there are examples of things you wouldn’t fund, or would see as very expensive and would have some rule-of-thumb for agreeing to (off-site residential retreats might be one). Or is it more ‘this seems within the range of things that might help, and we haven’t spent >$1m on this school yet?’
- Is there any counterfactual discounting? Obviously a lot of very talented people work in EA and/or have left jobs at the employers you mention to work in EA. So what’s the thinking on how this spending will improve the talent in EA?
Thanks for writing this—it definitely makes sense to me and resonates with another discussion we had in the Berlin EA office recently on what counts as “disposable income”.
I would just note three things:
If you are not pledging in order to build up financial security, would you consider pledging to donate these funds later in life if it turns out you don’t need them? A lot of your reasoning seems to be about self-insurance, which makes sense, but in the event that you don’t need the insurance (e.g. because you stay impactfully employed largely without a break for your whole career), it’d be great if you then donated those reserves steadily in your 70s and older. If you don’t want to do this, e.g. because you’d prefer to pass these reserves on to your kids, I think that does seem like it’s close to motivated reasoning (i.e. “it turns out the most impactful thing I can do is build up personal and intergenerational wealth” seems pretty convenient).
And then relatedly:
Depending on your personal circumstances, you might be surprised at how feasible it is to do the GWWC pledge and build up financial security at the same time. I definitely have less money set aside than if I hadn’t pledged, but I equally haven’t had to choose directly between pledging and being financially secure. Something to think about / factor in to your spreadsheet. The excellent Fi-lanthropy Calculator from Yield and Spread is great for showing how donating certain amounts extends your timelines for complete financial security, and it usually makes less of a difference than you think—https://www.yieldandspread.org/free-resources
And then finally, and a minor point:
FWIW, I think the only unconvincing scenario you mention is “if I was worried about EV/CEA/the usefulness of my work, I can imagine leaving without another opportunity lined up.”
In each scenario you mention, I think the correct trade off is “is the security for me not to suffer X more important than the benefit of donating this money?”. So when it comes to caring for your family, I think it’s fair enough to prioritise that pretty highly. But in this scenario, you could just carry on working at CEA while you do your job search, and I think it’s pretty indulgent to say “the ability to resign immediately rather than do my job and look for other jobs on the side is worth more than the value of donating money to help people in extreme poverty”. People become disillusioned in their jobs and start applying for other ones all the time, and I think you could do this as well, without undue hardship :-)
Thanks for making the case. I’m not qualified to say how good a Board member Nick is, but want to pick up on something you said which is widely believed and which I’m highly confident is false.
Namely—it isn’t hard to find competent Board members. There are literally thousands of them out there, and charities outside EA appoint thousands of qualified, diligent Board members every year. I’ve recruited ~20 very good Board members in my career and have never run an open process that didn’t find at least some qualified, diligent people, who did a good job.
EA makes it hard because it’s weirdly resistant to looking outside a very small group of people, usually high status core EAs. This seems to me like one of those unfortunate examples of EA exceptionalism, where EA thinks its process for finding Board members needs to be sui generis. EA makes Board recruitment hard for itself by prioritising ‘alignment’ (which usually means high status core EAs) over competence, sometimes with very bad results (e.g. ending up with a Board that has a lot of philosophers and no lawyers/accountants/governance experts).
It also sometimes sounds like EA orgs think their Boards have higher entry requirements than the Boards of other well-run charities. Ironically, this typically produces very low quality EA Boards, mainly made up of inexperienced people without relevant professional skills, but who are thought of as ‘smart’ and ‘aligned’.
Of course, it will be hard to find new Board members right now, because CEA’s reputation is in tatters and few people will want to join an organisation that is under serious legal threat. But it seems at best a toss up whether it’s worth keeping tainted Board member(s) because they might be tricky to replace, especially when they have recused themselves from literally the single biggest issue facing the charity.