Executive Director at One for the World; chair of trustees at High Impact Athletes.
Jack Lewars
What might FTX mean for effective giving and EA funding
EA in the mainstream media: if you’re not at the table, you’re on the menu
FTX/CEA—show us your numbers!
Why your EA group should promote effective giving (and how)
Open Board recruitment should be a norm
Book a corporate event for Giving Season
I understand that you are using this as an example of something you think is untrue and to demonstrate the asymmetrical burden of refuting a lot of claims.
However, if you’re prioritising, I would be most interested in whether it is true that you a) encouraged someone who you had financial and professional power over to drive without a driving licence; and b) encouraged someone in the same situation to smuggle drugs across international borders for you.
Whether or not they are formally an employee, encouraging people you have financial and professional power over to commit crimes unconnected to your mission is deeply unethical (and encouraging them to do this for crimes connected to your mission is also, at best, extremely ethically fraught).
Thanks for writing this Amber. I pretty firmly disagree, but I’m upvoting it anyway because I think we need to discuss these issues in the open, and you’ve put across your point of view in a measured, reasonable way. I hope to draft a response soon with some alternative suggestions.
(Also, as usual, @Peter Wildeford has made most of my points in his comment.)
I think my main disagreement is that this is taking on a straw man argument. I’m not aware of anyone suggesting that we should “prevent people from forming relationships with whom they want”, at least not in the strong sense of “banning” relationships. Indeed, I’ve spoken recently to a couple of people with experience of managing these issues, and they all say “people are free to date whomever they want” (provided the usual caveats about it being fully consensual etc.).
What is being suggested is that there should be clear policies for how these relationships will be handled, to ensure a) the safety of those involved, especially the partner(s) with less power; and b) the safety of the community as a whole. Those then give all parties the chance to make informed decisions about whom they form relationships with, and to give informed consent.
Accordingly, I think all EA orgs should adopt a clear “relationships at work” policy (and I hope to release a template for this soon). This wouldn’t say “you can’t date anyone from work”. But it would say:
1. If you have a nonprofessional relationship with someone at/directly related to work, we will actively manage any conflicts of interest, by doing X, Y and Z
2. If you have a nonprofessional relationship with someone at/directly related to work, you have to tell the relevant organisation(s) promptly so they can achieve (1)
3. You can’t use work time to advance your romantic interests, in particular by e.g. propositioning people
4. If you date someone at/related to work and there is a power imbalance, you have a particular duty to think very carefully about this, per @Peter Wildeford's comment
5. Even when you aren’t on work time, we reserve the right for your actions to have professional consequences if they are plausibly harmful (e.g. if someone comes into work on a Monday and says you sexually harassed them on a Friday night, that’s not ‘off limits’ for professional consequences)
Part 1 would involve things like recusing anyone who has a nonprofessional relationship with someone else from any decision about their pay, promotions, disciplinary processes etc., to avoid the obvious conflict of interest here (I commented elsewhere that I am concerned by you saying you would only “probably” find someone a new manager if their current manager starts dating them, although I don’t want to focus in too hard on a single word).
It also involves removing them from decisions about funding, hiring etc. for indirect work relationships.
Part 5 might seem the most draconian, but it is based on real examples of people who repeatedly sexually harassed colleagues and, because it happened 'after hours', were never acknowledged or dealt with.

If orgs adopted a policy like this from the get-go, it would give everyone involved the chance to give informed consent—that is, they could understand exactly the consequences of their actions in advance and decide what works best for them. This isn't "preventing people from forming relationships"—it's allowing them to form relationships in an informed way.
You are right, of course, that this might lead to people not starting a relationship that they otherwise would have, but that's just tough, I'm afraid—as other commenters have pointed out, this happens all the time in professional settings. Alternatively, they might choose to start the relationship anyway, and then they can choose whether to live with how that is managed, or one or other person can seek a job elsewhere, in a different team etc.

Anyway, in sum, I think this is the steelman position, and I think its harms (maybe people have fewer relationships/less sex with people they work with) are greatly outweighed by its benefits (a healthier and safer workplace and community, which conforms to wider workplace norms).
Join our collaboration for high quality EA outreach events (OFTW + GWWC + EA Community)
Thanks for writing this. I share some of this uneasiness—I think there are reputational risks to EA here, for example by sponsoring people to work in the Bahamas. I’m not saying there isn’t a potential justification for this but the optics of it are really pretty bad.
This also extends to some lazy 'taking money from internet billionaires' tropes. I'm not sure how much we should consider bad-faith criticisms like this if we believe we're doing the right thing, but it's an easy hit piece (and has already been done, e.g. a video attacking someone from the EA community who was running for Congress for being part-funded by Sam Bankman-Fried—I'm deliberately not linking to it here because it's garbage).
Finally, I worry about wage inflation in EA. EA already mostly pays at the generous end of nonprofit salaries, and some of the massive EA orgs pay private-sector-level wages (reasonably, in my view—if you're managing $600m/year at GiveWell, it's not unreasonable to be well-paid for that). I've spent most of my career arguing that people shouldn't have to sacrifice a comfortable life if they want to do altruistic work—but it concerns me that entry-level positions in EA are now being advertised at what would be CEO-level salaries at other nonprofits. There is a good chance, I think, that EA ends up paying professional staff significantly more to do exactly the same work to exactly the same standard as before, which is a substantive problem; and there is again a real reputational risk here.
Hi Claire—thanks for the extra info here, which is very helpful.
Can you say whether you/Open Phil considered anything here to be a conflict of interest and if so how you managed that?
At a first glance, a trustee of EVF recommending a grant of £10m+ to EVF on behalf of their employer seems like a CoI.
How we promoted EA at a large tech company (v2.0)
Lessons and results from workplace giving talks
Hi team—this sounds exciting!
I have some questions about how you will align with other university-focussed organisations (with a healthy dose of self-interest, obviously!).
If you are plausibly providing $17m-$54m in annual budget for university organising at just 17 schools, you’re likely going to dwarf every other organisation on campus (and definitely every other EA org). How will you avoid completely eclipsing other organisations with a different focus/approach/merits? A particular concern here would be if this capital overwhelmingly favoured longtermism, for example, which already benefits from an enormous imbalance in allocated EA funding.
I’m also interested in where the funding for this is coming from—this represents a dramatic increase on CEA’s budget and spend on university organising, which you seem confident that you can source.
I’m especially interested in this because this is a huge amount of money on a new initiative, I think without a track record or tested methodology. So in essence it’s a new idea, seeking to develop and test new approaches, but one being supported with a huge influx of capital (relative to other EA resources directed at these campuses). There are obviously risks here, the most obvious being that the programme might be ineffective/much less effective than alternatives, or make mistakes that have quite wide-ranging consequences, but become the dominant player anyway because of a huge asymmetry of resources.
All of this being said, I want to be clear that I’m actually super excited about this ambition and focus and One for the World will of course support as much as we can :-)
Hi Shakeel,
Thanks for this. I agree with your post and upvoted it.
However, I do also wonder if they are following what seems to be a common theme in EA crisis comms recently, which is to say as little as possible (presumably on the basis of legal advice). You wrote about this here: https://forum.effectivealtruism.org/posts/Et7oPMu6czhEd8ExW/why-you-re-not-hearing-as-much-from-ea-orgs-as-you-d-like
I agree with you that just about any comment or explanation from FLI would seem to help, and that passing the email exchange with Max over to Denton's seems to make the situation worse (slower and less complete responses, bad optics, etc.).
As an outsider (i.e. with no access to the legal advice or internal discussions at any of these orgs), I wonder how the legal risk is being weighed against the reputational risk in EA crisis comms at the moment. It seems like there is almost no communication coming out from EA orgs and leaders, which presumably is very legally safe but can have very damaging reputational consequences.
I expect you’re constrained in what you can say in response to this but, candidly, I think it’s important to note that CEA itself is choosing public silence, albeit about a different issue. Accordingly, I was surprised to see you posting (in a personal capacity) about another org needing to give a full explanation and you not understanding why they haven’t, and especially in such strong terms. CEA’s example probably influences a lot of other orgs within EA.
Is there something I am missing on this? Maybe that the FTX situation is sui generis?
Thanks Ben. A few things:
- I read the posts I could find on this topic on the forum, none of which mention a PR agency or hiring a PR team for the movement
- I've talked about this with a lot of other EA professionals, and no one has mentioned that this idea is coming, although all of them thought it was a good idea to some degree
- I posted it as an idea for FTX and it wasn't taken up, without feedback or any suggestion that it was already happening
- I visited the many examples of mainstream press criticism of EA cited in other posts and saw no response or comment 'from EA'
But, most importantly, the things you list here don’t address the suggestion of this post. Individual orgs being advised by PR professionals, or Longview aiming for more press coverage, has at best partial overlap with the effect of a dedicated PR team for EA.
This is also self-evidently not having the effect of countering op-eds like this one (although that might be low priority work).
So, when OFTW, at least one GiveWell charity, and presumably others are forwarded this op-ed by potential or actual major donors, and there are crickets 'from EA' in response, I think it's pretty reasonable to say 'this seems like a good idea—can we make it happen?'
One for the World has had the Double The Donation widget for a couple of years. Unfortunately, it is about to become considerably more expensive, as they are upgrading everyone to a service called Match365. The plus side of this is that it will search your database for people who could be getting matching and email them proactively, and try to smooth the process of them getting matching. The downside is that it’s fairly expensive (~$4k/year), but I think it’d still be positive ROI, at least in the first year (probably with diminishing returns after that).
One thing to point out is that DTD massively overclaims its success rate. For example, it emails me every month saying “52 people claimed matching at their companies via the widget on your website”, but it turns out this means 52 people searched for matching, and I would estimate 1-2 per month actually end up accessing it (based on our overall volume of incoming corporate matching).
One final note: @Neil Warren, as you are at Google, is there a reason you aren't using Benevity to do your donations, which automatically adds matching? If this is new to you, see if you can log in at google.benevity.org
True, but GiveWell doesn’t expect funding to grow at the same rate as top quality funding opportunities, so that $1bn/year is going to need further donors. Unless we believe GiveWell’s top programmes/charities will never have a funding shortfall again, the point about where EA prioritises its funding still seems relevant.
Donating to AMF still seems like a good benchmark for cost effectiveness. Unlike George, my instinct is that e.g. a team retreat for an EA Group is likely to produce considerably less impact than spending the money on bednets or other GiveWell top charities.
Isn’t part of this considering whether Will’s comparative advantage is as a Board member? It seems very unlikely to me that it is, versus being a world class philosopher and communicator.
So I agree with your general point that leaders who make mistakes might not need to resign, but in the specific case I can’t see how Will is most impactful by being a Board member at really any org, as opposed to e.g. a philosophical or grant-making advisor.
Thanks for making the case. I’m not qualified to say how good a Board member Nick is, but want to pick up on something you said which is widely believed and which I’m highly confident is false.
Namely—it isn’t hard to find competent Board members. There are literally thousands of them out there, and charities outside EA appoint thousands of qualified, diligent Board members every year. I’ve recruited ~20 very good Board members in my career and have never run an open process that didn’t find at least some qualified, diligent people, who did a good job.
EA makes it hard because it’s weirdly resistant to looking outside a very small group of people, usually high status core EAs. This seems to me like one of those unfortunate examples of EA exceptionalism, where EA thinks its process for finding Board members needs to be sui generis. EA makes Board recruitment hard for itself by prioritising ‘alignment’ (which usually means high status core EAs) over competence, sometimes with very bad results (e.g. ending up with a Board that has a lot of philosophers and no lawyers/accountants/governance experts).
It also sometimes sounds like EA orgs think their Boards have higher entry requirements than the Boards of other well-run charities. Ironically, this typically produces very low quality EA Boards, mainly made up of inexperienced people without relevant professional skills, but who are thought of as ‘smart’ and ‘aligned’.
Of course, it will be hard to find new Board members right now, because CEA’s reputation is in tatters and few people will want to join an organisation that is under serious legal threat. But it seems at best a toss up whether it’s worth keeping tainted Board member(s) because they might be tricky to replace, especially when they have recused themselves from literally the single biggest issue facing the charity.