Free-spending EA might be a big problem for optics and epistemics
NB: I think EA spending is probably a very good thing overall and I’m not confident my concerns necessarily warrant changing much. But I think it’s important to be aware of the specific ways this can go wrong and hopefully identify mitigations. Thanks to Marka Ellertson, Joe Benton, Andrew Garber, Dewi Erwan, Joshua Monrad and Jake Mendel for their input.
Summary
The influx of EA funding is brilliant news, but it has also left many EAs feeling uncomfortable. I share this discomfort and set out two concrete concerns I have come across recently.
Optics: EA spending is often perceived as wasteful and self-serving, creating a problematic image which could lead to external criticism, outreach issues and selection effects.
Epistemics: Generous funding has provided extrinsic incentives for being EA/longtermist which are exciting but also significantly increase the risks of motivated reasoning and make the movement more reliant on the judgement of a small number of grantmakers.
I don’t really know what to do about this (especially since it’s overall very positive), so I give a few uncertain suggestions but mainly hope that others will have ideas and that this will at least serve as a call to vigilance in the midst of funding excitement.
Introduction
In recent years, the EA movement has received an influx of funding. Most notably, Dustin Moskovitz, Cari Tuna and Sam Bankman-Fried have each pledged billions of dollars, such that funding is more widely available and deployed.
This influx of funding has completely changed the game. First and foremost, it is wonderful news for those of us who care deeply about doing the most good and tackling the huge problems which we have been discussing for years. It should accelerate our progress significantly and I am very grateful that this is the case. But it has also had a drastic effect on the culture of the movement which may have unfortunate consequences.
A few years ago, I remember EA meet-ups where we’d be united by our discomfort towards spending money in fancy restaurants because of the difference it could make if donated to effective charities. Now, EA chapters will pay for weekly restaurant dinners to incentivise discussion and engagement. Many of my early EA friends also found it difficult to spend money on holidays. Now, we are told that one of the most impactful things university groups can do is host an all-expenses-paid retreat for their students.
I should emphasise here that I think these expenditures are probably good ideas which can be justified by the counterfactual engagement which they facilitate. These should probably continue to happen, however uncomfortable they make us feel.
But the fact that these decisions can be justified on one level doesn’t mean that they don’t also cause concrete problems which we should think about and mitigate.
Big Spending as an Optics Issue
Over the past few months, I’ve heard critical comments about a range of spending decisions. Several people asked me whether it was really a good use of EA money to pay for my transatlantic flights for EAG. Others challenged whether EAs seriously claim that the most effective way to spend money is to send privileged university students to an AirBNB for the weekend. And that’s before they hear about the Bahamas visitor programme…
In fact, I have recently found myself responding to spending objections more often than the standard substantive ones (e.g. what about my favourite charity?, can you really compare charities with each other?, what about systemic issues?).
I am not contesting here whether these programmes are worth the money. My own view is that most of them probably are and I try to lay this out to those who ask. But it is the perceptions which I find most concerning: many people see the current state of the movement and intuitively conclude that lots of EA spending is not only wasteful but also self-serving, straying far from what you’d expect the principles of an ‘effective altruism’ movement to be. Given the optics issues which have hindered the progress of EA in the past, we should be wary of this dynamic.
Importantly, I’ve heard this claim not only from critics of EA, but also from committed group members and an aligned student who might otherwise be more involved. This suggests that aside from opening us up to external criticism from people who don’t like EA anyway, spending optics may also hinder outreach and lead to selection effects, whereby proto-EAs who are uncomfortable with how money is spent are put off the movement and less likely to get involved. (I am grateful to Marka Ellertson and Joshua Monrad, who both raised versions of this valuable point.)
Longtermism vs Neartermism
One especially problematic framing concerns the apparent discrepancy between longtermist and neartermist funding. Many people find it understandably confusing to hear that ‘EA currently has more money than it can spend effectively’ whilst also noticing that problems like malaria and extreme poverty still exist, especially given how much EA focuses on how cheap it is to save a life and how important it is to practise what we preach.
I don’t claim that more money should necessarily go to neartermist areas, but I fear that excellent people who initially come to EA through a global health or animal welfare route may be put off by this dynamic and leave the movement entirely, especially if it isn’t explained with nuance and sensitivity. This is a comment which I have heard repeatedly over recent months and I am concerned that it could become a significant obstacle to EA movement-building, including for future longtermists.
Coordination and the Unilateralist’s Curse
Longtermists often mention the unilateralist’s curse as a problem associated with various x-risks. Even if the vast majority of altruistic actors behave sensibly, it only takes one actor reaching a different decision from the group to cause a catastrophe. It seems to me that similar dynamics exist with EA spending. Even if most funders are careful about optics, it only takes one misstep to attract headlines and stick in people’s heads. Given past experience with ‘earning to give’, this should be especially concerning for the movement.
Financial Incentives as an Epistemics Issue
Several years ago before the increase in funding, it didn’t pay to be EA. In fact, it was rather costly: financially costly because it usually involved a commitment to give away a lot of one’s resources, and socially costly because most people have an intuitive aversion to EA principles. As a result, most people around EA were probably there because they had thought hard and were really convinced that it was morally right.
In 2022, this is no longer necessarily the case. Suddenly, being an EA is exciting for a bunch of extrinsic reasons. College-age EAs have the chance to be flown around the world to conferences, invited to all-expenses-paid retreats and offered free dinners as an incentive for engaging with the community and the content.
As stated before, this is very exciting and a great thing. Generous funding gives us the chance to set ambitious visions to make EA huge on campuses around the world and get the best talent working on the biggest problems. Moreover, it can improve our diversity by making careers such as community-building accessible to people from different socioeconomic backgrounds. But it also risks clouding our judgement as individuals and as a movement.
Consider the case of a college freshman. You read your free copy of Doing Good Better and become intrigued. You explore how you can get involved. You find out that if you build a longtermist group in your university, EA orgs will pay you for your time, fly you to conferences and hubs around the world and give you all the resources you could possibly make use of. This is basically the best deal that any student society can currently offer. Given this, how much time are you going to spend critically evaluating the core claims of longtermism? And how likely are you to walk away if you’re not quite sure? Anecdotally, I’ve spoken to several organisers who aren’t convinced of longtermism but default to following the money nevertheless. I’ve even heard (joking?) conversations about whether it’s worth ‘pretending’ to be EA for the free trip.
When my friends in finance (not earning to give) tell me they’re working at Goldman to improve the world, I am normally sceptical. Psychology literature on motivated reasoning and confirmation bias suggests that we are excellent at finding views which justify whatever is in our interests. For example, one study shows that our moral judgements can be significantly altered by financial incentives; another shows that we naturally strengthen our existing views by holding confirming and disconfirming evidence to different standards.
Fortunately, unlike with finance careers, I think that longtermist careers are likely to be among the most impactful available to us. But given the financial incentives, I would expect it to be very difficult to notice if either longtermism as a whole or specific spending decisions turned out to be wrong. Research suggests that when a lot of money is on the line, our judgement becomes less clear. It really matters that the judgement of EAs is clear, so having a lot of money on the line should be cause for concern.
This is especially problematic given the nature of longtermism, simultaneously the best-funded area of EA and also the area with the most complex philosophy and weakest feedback loops for interventions.
Maybe this risk is mitigated by the fact that grantmakers in EA set these incentives by deciding where the money goes, and their judgements are careful and well-calibrated from years of experience, evaluation and excellent in-house research. This seems plausible to me. But if strong incentives are shifting our epistemic confidence from the movement as a whole to a small number of grantmakers, this is something we should at least notice.
What can we do differently?
I’m really not sure what the answer to this is, especially because I think most of these funding opportunities seem very good, so we shouldn’t stop them. I’m mainly putting this out there to start a conversation because I’m not sure how aware we are of these dynamics (I wasn’t until recently and others seem to think it is a concern which isn’t discussed enough, perhaps for some of the reasons stated above).
A few initial thoughts, not proposed with particular confidence:
Can we create better resources for how to talk about the spending when it comes up, just like we have for substantive objections to EA? For example, accessible posts on why retreats / conferences / free dinners are considered good value for money under rigorous evaluative frameworks.
(From Andrew) Along these lines, it could be valuable for university groups to conduct and publish some rough cost-benefit analyses of major programs (e.g. running a retreat, budgeting for socials, book and cookie giveaways, deciding whether to get an office); a rough sketch of what such an analysis could look like follows after this list of suggestions. This is probably a good exercise for general EA thinking, but it might also help reduce some wastefulness by making EA groups think more about how they use money.
A counter would be that this process takes time which could be spent on directly valuable activities—though for the reasons stated above, we should perhaps be sceptical of arguments which justify spending without thinking.
It would be helpful to lay out clearly what money is available to which parts of the EA movement and what it can and can’t do. This would help clarify questions such as: “if EA has more money than it can spend effectively, why isn’t it giving more to AMF / why is it still encouraging people to donate to AMF / why can’t it just solve biorisk through brute financial force”. This post is a great start.
We should be careful with how we advertise EA funding. For example, we should avoid the framing of ‘people with money want to pay for you to do X’ and replace this with an explanation of why X matters a lot and why we don’t want anyone to be deterred from doing X if the costs are prohibitive.
Given the unilateralist’s curse, perhaps there should be some central forum for EA funders to coordinate / agree upon policies with an optics perspective in mind. Maybe this is already happening—I am certainly not well-placed to assess the ecosystem.
(From Joe) Where appropriate, it should be made clear that grants aren’t conditional on agreement with the community. Funding criticism is a great start, but many people receiving grants (e.g. for travel) may still feel that there’s an implicit expectation for them to agree with the funder’s view, and we should make it clearer to people when this is not the case.
Note, for example, that people who receive EA funding may find it more difficult to publish a critical piece like this, given the benefits they derive from the status quo, the risk of appearing hypocritical, and the feeling that they would be betraying the people funding them. As more EAs come to benefit from EA funding, this problem may grow.
In this vein and if we think this is a big enough concern, perhaps we should encourage more criticism specifically relating to how funding is deployed?
Should we re-emphasise the norm of significant giving? Money donated to top global health / animal welfare charities can still do a huge amount of good and taking this seriously as a community would help us avoid the mindset whereby the most impactful things we can do involve taking money rather than giving.
A counter is that this may distract from other longtermist priorities which are much more valuable, but it might help with both optics and epistemics.
(From Joe) At the very least, we should make the opportunity cost of funding more salient. EA was predicated on recognising the trade-offs inherent to altruistic decisions, and we shouldn’t forget that every ~$5,000 spent on speculative longtermist initiatives statistically costs a life in the short term. This is a significant responsibility which we shouldn’t take lightly, yet current free-spending norms point the other way.
Although we should often be willing to accept time-money trade-offs, there are some cases where norm shifts could go a long way, such as putting students up in cheaper hotels, booking flights further in advance, or selecting cheaper flights where the inconvenience is minimal (rather than treating money as no object).
While this wouldn’t necessarily change our actions significantly, having a culture where this is collectively acknowledged would reduce the problematic impression that we’ve stopped appreciating the value of money.
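As a concrete illustration of the cost-benefit-analysis suggestion above, here is a minimal sketch of the kind of back-of-the-envelope model a university group could publish for a retreat. Every number in it is a hypothetical placeholder rather than an estimate anyone has endorsed; the value of the exercise is that it forces the key judgement calls (how many attendees engage counterfactually, and what that engagement is worth) into the open where they can be criticised.

```python
# Hypothetical BOTEC for a university group retreat.
# All figures are placeholders to be replaced with a group's own estimates.

costs = {
    "venue_and_accommodation": 3000,  # GBP for ~20 students
    "travel": 1500,
    "food": 800,
    "organiser_time": 40 * 25,        # 40 hours of organiser time valued at GBP 25/hour
}
total_cost = sum(costs.values())

attendees = 20
p_counterfactual_engagement = 0.15    # share of attendees who become much more engaged because of the retreat
value_per_engaged_member = 10_000     # GBP-equivalent value placed on one counterfactually engaged member

expected_benefit = attendees * p_counterfactual_engagement * value_per_engaged_member
cost_to_save_a_life = 5_000           # rough GiveWell-style benchmark cited in the post, for scale

print(f"Total cost: £{total_cost:,}")
print(f"Expected benefit: £{expected_benefit:,.0f}")
print(f"Benefit/cost ratio: {expected_benefit / total_cost:.1f}")
print(f"Opportunity cost in 'statistical lives': {total_cost / cost_to_save_a_life:.2f}")
```

Publishing even a model this crude makes it much easier for others to challenge the assumptions than a bare claim that ‘the retreat was worth it’.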
Do you agree with the problems I’ve raised? If so, how do you think we can mitigate them?
One thing that bugged me when I first got involved with EA was the extent to which the community seemed hesitant to spend lots of money on stuff like retreats, student groups, dinners, compensation, etc. despite the cost-benefit analysis seeming to favor doing so pretty strongly. I know that, from my perspective, I felt like this was some evidence that many EAs didn’t take their stated ideals as seriously as I had hoped—e.g. that many people might just be trying to act in the way that they think an altruistic person should rather than really carefully thinking through what an altruistic person should actually do.
This is in direct contrast to the point you make that spending money like this might make people think we take our ideals less seriously—at least in my experience, had I witnessed an EA community that was more willing to spend money on projects like this, I would have been more rather than less convinced that EA was the real deal. I don’t currently have any strong beliefs about which of these reactions is more likely/concerning, but I think it’s at least worth pointing out that there is definitely an effect in the opposite direction to the one that you point out as well.
Precisely. Also, the frugality of past EA creates a selection effect, so probably there is a larger fraction of anti-frugal people outside the community (and among people who might be interested) than we would expect from looking inside it.
My anecdotal experience hiring is that I get many more prospective candidates saying something like “if this is so important why isn’t your salary way above market rates?” than “if you really care about impact, why are you offering so much money?” (Though both sometimes happen.)
I agree that it’s possible to be unthinkingly frugal. It’s also possible to be unthinkingly spendy. Both seem bad, because they are unthinking. A solution would be to encourage EA groups to practice good thinking together, and to showcase careful thinking on these topics.
I like the idea of having early EA intro materials and university groups that teach BOTECs, cost-benefit analysis, and grappling carefully with spending decisions.
This kind of training, however, trades off against time spent learning about e.g. AI safety and biosecurity.
Great point! I think each spending strategy has its pitfalls related to signalling.
I think this correlates somewhat with people’s knowledge of and engagement with economics, and with political lean. “Frugal altruism” will probably attract more left-leaning people, while “spending altruism” probably attracts more right-leaning people.
1) One way to see the problem is that in the past we used frugality as a hard-to-fake signal of altruism, but that signal no longer works.
I’m not sure that’s an entirely bad thing, because frugality seems mixed as a virtue e.g. it can lead to:
Not spending money on clearly worth it things (e.g. not paying to have a larger table at a student fair even when it would result in more sign ups; not getting a cleaner when you earn over $50/hour), which in turn can also make us seem not serious about maximising impact (e.g. this comment).
Even worse, getting distracted from the top priority by worrying about efforts to save relatively small amounts of money. Or not considering high upside projects that require a lot of resources, but where there’s a good chance of failure, due to a fear of not being able to justify the spending.
Feelings of guilt around spending and not being perfectly altruistic, which can lead to burn out.
Filtering out people who want a normal middle class lifestyle & family, but could have had a big impact (and go work at FAANG instead). Filtering out people from low income backgrounds or with dependents.
However, we need new hard-to-fake signals of seriousness to replace frugality. I’m not sure what these should be, but here are some alternative things we could try to signal, which seem closer to what we most care about:
That we nerd out hard about doing good.
Intense focus on the top priority.
Doing high upside things even if there’s a good chance they might not work out and seem unconventional.
Giving 10% (or more) (which is compatible with non-frugality)
The difficulty is to think of hard-to-fake and easy-to-explain ways to show we’re into these.
2) Another way to see the problem is that in the past we’ve used the following idea to get people into EA: “you can save a life for a few thousand dollars and should maximise your donations to that cause”. But this idea is obviously in tension with the activities that many see as the top priorities these days (e.g. wanting to convince top computer scientists to work on the AI alignment problem).
My view is that we should try to move past this way of introducing effective altruism, and instead focus more on ideas like:
Let’s do the most we can to tackle big, neglected global problems. (I’d probably start by introducing climate change and/or pandemics rather than global health.)
Find high-upside projects that help tackle the biggest bottlenecks in those problems.
If you want to do good, do it effectively, and focus on the highest-leverage ways you can help (but ~no-one is perfectly altruistic and it’s fine to have a nice life too).
Agree.
Fully agree we need new hard-to-fake signals. Ben’s list of suggested signals is good. Other things I would add are being vegan and cooperating with other orgs / other worldviews. But I think we can do more as well as increase the signals. Other suggestions of things to do are:
Testing for altruism in hiring (and promotion) processes. EA orgs could put greater weight on various ways to test or look for evidence of altruism and kindness in their hiring processes. There could also be more advice and guidance for newer orgs on the best ways to look for and judge this when hiring. Decisions to promote staff should seek feedback from peers and direct reports.
Zero tolerance to funding bad people. Sometimes an org might be tempted to fund or hire someone they know, or have reason to expect, is a bad person or is primarily seeking power or prestige rather than impact. Maybe this person has relevant skills and can do a lot of good. Maybe on a naïve utilitarian calculus it looks good to hire them, as we can pay them for impact. I think there is a case to be heavily risk averse here and avoid hiring or funding such people.
Accountability mechanisms. Top example: external impact reviews of organisations. This could provide a way to check for and discourage any corruption / excess / un-cooperativeness. Maybe an EA whistleblowing system (but maybe not needed). Maybe more accountability checking and feedback for individuals in senior roles in EA orgs (not so sure about this, as it can backfire).
So far the community seems to be doing well. Yet EA is gaining resources and power. And power has been known to corrupt. So let’s make sure we build in mechanisms so that doesn’t happen to our wonderful community.
(Thanks to others in discussion for these ideas)
[edited]
Random, but in the early days of YC they said they used to have a “no assholes” rule, which meant they’d try not to accept founders who seemed like assholes, even if they thought they might succeed, due to the negative externalities on the community.
Seems like a great rule. Do you know why they don’t have this rule anymore? (One plausible reason: the larger your community gets, the harder such a rule is to implement, which would mean this wouldn’t be feasible for the EA community anymore.)
Hey, do you happen to know me in real life and would be willing to talk about these issues offline?
I’m asking because it seems unlikely you will be able to be more specific publicly (though it would be good if you were and just wrote it here), and so it would be good to talk about the specific examples or perceptions in a private setting.
I know someone who went to EAG who is sort of skeptical and looks for these things, but they didn’t see a lot of bad things at all.
(Now, a caveat is that selection is a big thing. Maybe a person might miss these people for various idiosyncratic reasons.)
But I’m really skeptical about major issues, and in the absence of substantive issues (which, by the way, don’t need hard data to establish), it seems negative EV to generate a lot of concern or to use this kind of language.
One issue is that problems are self-fulfilling: if you start pointing a lot at bad actors in a vague way, you’ll find that you start losing the benefits of the community. As long as these people don’t enter senior levels or community-building roles you’re pretty good.
Another issue is that trust networks are how these issues are normally solved, and yet there’s pressure to open these networks, which runs into the teeth of these issues.
To be clear, I’m saying that this funding and trust problem is probably being worked on. Having a lot of noise about this issue, or people poking the elephant, or just having bad vibes that aren’t substantiated, can be net negative.
Thank you for the comment. I edited out the bit you were concerned about as that seemed to be the quickest/easiest solution here. Let me know if you want more changes. (Feel free to edit / remove your post too.)
Hi, this is really thoughtful. In the spirit of being consistent with the actions in your reply, and following your lead, I edited my post.
However, I didn’t intend to prompt an edit to this thread, and I especially did not intend to undo discussion.
It seems more communication is good.
It seems like raising the issue is good, as long as that is balanced with good judgement and proportionate action and beliefs. A good next step would be to understand, substantiate, or explore the issues.
Part of me is a bit sad that community building is now a comfortable and status-y option. The previous generation of community builders had a really high proportion of people who cared deeply about these ideas, were willing to take weird ideas seriously and often take a substantial financial/career security hit.
I don’t think this applies to most of the current generation of community builders to the same degree and it just seems like much more of a mixed bag people wise. To be clear I still think this is good on the margin, I just trust the median new community builder a lot less (by default).
Interesting! I work in CB full-time (Director of EA Germany), and my impression is still that it’s challenging work, pays less than what I and my peers would earn elsewhere and most of the CB roles still have a lot less status than e.g. being a researcher who gets invited to give talks etc.
Do you think some CBs are motivated by money or status? What makes you think so? I’m genuinely curious (though no worries if you don’t feel like elaborating).
I think I am mostly comparing to how different my impression of the landscape of a few years ago is to today’s landscape.
I am mostly talking about uni groups (I know less about how status-y city groups are), but there were certainly a few people putting in a lot of hours for 0 money and not much recognition from the community for just how valuable their work was. I don’t want to name specific people I have in mind, but some of them now work at top EA orgs or are doing other interesting things and have status now; I just think it was hard for them to know that this is how it would pan out, so I’m pretty confident they are not particularly status-motivated.
I’m also pretty confident that most community builders I know wouldn’t be doing their job on minimum wage even if they thought it was the most impactful thing they could do. That’s probably fine, I just think they are less ‘hardcore’ than I would like.
Also, being status-motivated is not necessarily a bad thing. I’m confused about this, but it’s plausibly a good thing for the movement to have lots of status-motivated people, to the degree that we can make status track the right stuff. I am sure that part of why I am less excited about these people is a vibes thing that isn’t tracking impact.
Something I like about “Doing high upside things even if there’s a good chance they might not work out and seem unconventional” as a mark of seriousness is that it’s its own form of sacrifice: being willing to look weird and fail and give up on full security and job comfort and do something hard because it’s positive EV.
In your list of new hard-to-fake signals of seriousness, I particularly like the one about doing high-upside things even if there’s a good chance they might not work out.
I think that this is underrated. As a community, we overemphasise actually achieving things in the real world, meaning that if you want to get ahead within EA it often pays to do the middling but reasonable thing over the super-high-EV thing, as the weird super-high-EV thing probably won’t work.
I’m much more excited when I meet young people who keep trying a bunch of things that seem plausibly very high value and give them lots of information, relative to people who did some OK-ish things that let them build a track record/status. FWIW I think that some senior EAs do track these high-EV, high-risk things really well, but maybe the general perception of what people ought to do is too close to that of the non-EA world.
I would expect detrimental effects if nerding out became even more of a paid-attention-to signal. It’s something you can do endlessly without ever helping a person. But maybe you just mean “successfully making valuable intellectual contributions”, in which case I agree.
Agreed. There seems to be what I can best call an intellectual aesthetic that drives about half of the instances of “nerding out” that I observe in the [East] Bay Area. The contrast between the Bay Area attitude and the Oxford attitude, the latter of which I guess applies to Ben Todd, has continually surprised me, and this variable of location may be dispositive over whether “nerding out” is evidence of desirable character.
Thanks, I thought this was the best-written and most carefully argued of the recent posts on this theme.
Extra ideas for the idea list:
Altruistic perks, rather than personal perks. E.g. 1: turn up at this student event and get $10 donated to a charity of your choice. E.g. 2: donation matching schemes mentioned in job adverts, perhaps funded by offering slightly lower salaries. Anecdotally, I remember the first EA-ish event I went to had money to charity for each attendee and free wine; it was the money to charity that attracted me to go, and the free wine that attracted my friend, and I am still here and they are not involved.
Frugality options, like an optional version of the above idea. E.g. 1: when signing up to an EA event, the food options could be: “[] vegan, [] nut free, [] gluten free, [] frugal—will bring my own lunch, please donate the money saved to charity x”. E.g. 2: jobs could advertise that the organisation offers salary sacrifice schemes that some employees take. I don’t know how well this would work but would be interested to see a group try. Anecdotally, I know some EAs in well-paid jobs take lower salaries than they are offered, but I don’t think this is well known.
Also, for what it is worth, I was really impressed by the post. It was a very well written, clear, and transparent discussion of this topic with clear actions to take.
I would love frugality options!
+1, the frugality options seem like a nice way to “make the opportunity cost of funding more salient” without necessarily requiring huge changes from event organizers.
+1 here as well, frugality option would be an amazing thing to normalize, especially if we can get it going as a thing beyond the world of EA (which may be possible if we get some good reporting on it).
+1. One concrete application: Offer donation options instead of generous stipends as compensation for speaking engagements.
I worry that it’d feel pretty fake for people who actually care about counterfactual impact. Money goes from EA sources to EA sources both ways.
Most EAs I’ve met over the years don’t seem to value their time enough, so I worry that the frugal option would often cost people more impact in terms of time spent (e.g. cooking), and it would implicitly encourage frugality norms beyond what actually maximizes altruistic impact.
That said, I like options and norms that discourage fancy options that don’t come with clear productivity benefits. E.g. it could make sense to pay more for a fancier hotel if it has substantially better Wi-Fi and the person might do some work in the room, but it typically doesn’t make sense to pay extra for a nice room.
I think I agree with this. I think if I look historically at my mistakes in spending money, there was very likely substantially more utility lost from spending too little money rather than spending too much money.
To be more precise, most of my historical mistakes do not come from consciously thinking about time-money tradeoffs and choosing money instead of time (“oh I can Uber or take the bus to this event but Uber is expensive so I should take the bus instead”) but from some money-expensive options not being in my explicit option set to prioritize in the first place (“oh taking the bus will take four hours total so I probably shouldn’t attend the event”).
As I get in the habit of explicitly valuing my time often and trying to consider ways to buy time, I notice more and more options that my younger (and poorer) self would not even consider to be in the option set (e.g. international flights to conferences, cleaners, ordering food, paying money to alleviate bureaucracy hurdles, etc). Admittedly this coincided with the EA movement generally being much more spendthrift (and also there being far more resources now on time-money tradeoffs for people in my reference class) so it’s plausible younger EAs don’t have to go through the same mental evolutions to get the same effect.
I’m going through this right now. There have just clearly been times, both as a group organiser and in my personal life, when I should have just spent/taken money and in hindsight would clearly have had higher impact, e.g. buying uni textbooks so I could study with less friction and get better grades.
I know this isn’t the only thing to track here, but it’s worth noting that funding to GiveWell-recommended charities is also increasing fast, both from Open Philanthropy and from other donors. Enough so that last year GiveWell had more money to direct than room for more funding at the charities that meet their bar (which is “8x better than cash transfers”, though of course money could be donated to things less effective than that). They’re aiming to move $1 billion annually by 2025.
True, but GiveWell doesn’t expect funding to grow at the same rate as top quality funding opportunities, so that $1bn/year is going to need further donors. Unless we believe GiveWell’s top programmes/charities will never have a funding shortfall again, the point about where EA prioritises its funding still seems relevant.
Donating to AMF still seems like a good benchmark for cost effectiveness. Unlike George, my instinct is that e.g. a team retreat for an EA Group is likely to produce considerably less impact than spending the money on bednets or other GiveWell top charities.
In the spirit of trying to really engage with the question and figure out ground truth, maybe it’s worth making a quick CBA or guesstimate model based on your general views for “Unlike George, my instinct is that e.g. a team retreat for an EA Group is likely to produce considerably less impact than spending the money on bednets or other GiveWell top charities” and then we can debate specifics and maybe come to better heuristics about this kind of thing. I’d be excited to see what numbers your intuition puts on things.
Completely agree. I will write something about this tomorrow
I’ve seen the time-money tradeoff reach some pretty extreme, scope-insensitive conclusions. People correctly recognize that it’s not worth 30 minutes of time at a multi-organizer meeting to try to shave $10 off a food order, but they extrapolate this to it not being worth a few hours of solo organizer time to save thousands of dollars. I think people should probably adopt some kind of heuristic about how many EA dollars their EA time is worth and stick to it, even when it produces the unpleasant/unflattering conclusion that you should spend time to save money.
Also want to highlight “For example, we should avoid the framing of ‘people with money want to pay for you to do X’ and replace this with an explanation of why X matters a lot and why we don’t want anyone to be deterred from doing X if the costs are prohibitive” as what I think is the most clearly correct and actionable suggestion here.
I agree we should be careful with the “spend money to save time” guideline. It can be self-serving because spending time to save money can be unpleasant.
Also, there is the danger that you get used to the luxury of spending money to save time. If your situation changes, or you need to revise your estimate of the value of your time downwards, you should be willing to spend the time and not the money! (I hope this does not happen to you, but it may happen, e.g. if you need to move to your career plan B/C/Z.)
This also applies to other luxuries.
This is a valuable point.
Man, I find it so difficult (on, like, an emotional level) to think clearly about the dollar value of an hour of my time (I feel like it is overvalued?? because so many people make so much less money than me, a North American???) but I agree that adopting some kind of clear heuristic here is good, and that I should more frequently be doing explicit trades of “I will spend up to 2 hours on trying to find a cheaper option, because I think in expectation that’s worth $60”.
You might be aware of this but for others reading - there’s a calculator to help you work out the value of your time.
I think it’s worth doing once (and repeating when your circumstances change, e.g. new job), then just using that as a general heuristic to make time-money tradeoffs, rather than deliberating every time.
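For illustration, here is a minimal sketch of the heuristic being described: settle on a rough hourly value once, then apply it mechanically to each time-money decision. This is not the linked calculator; the hourly figure and the simple break-even rule below are assumptions chosen purely for the example.

```python
# Illustrative time-money tradeoff heuristic (not the linked calculator).
# HOURLY_VALUE is a placeholder: set it once, revisit it when your circumstances change.

HOURLY_VALUE = 60  # GBP per hour of your time (assumed for this example)

def worth_spending_time(hours_spent: float, money_saved: float) -> bool:
    """True if the money saved exceeds the value of the time spent finding the cheaper option."""
    return money_saved > hours_spent * HOURLY_VALUE

print(worth_spending_time(0.5, 10))   # half an hour to shave £10 off a food order -> False
print(worth_spending_time(3, 1000))   # a few solo hours to save £1,000 -> True
```

The point of pre-committing to a number is that it cuts both ways: it licenses spending money to save time, but also, as the comment above notes, spending time to save money when the amounts are large enough.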
If a community claims to be altruistic, it’s reasonable for an outsider to seek evidence: acts of community altruism that can’t be equally well explained by selfish impulses, like financial reward or desire for praise. In practice, that seems to require that community members make visible acts of personal sacrifice for altruistic ends. To some degree, EA’s credibility as a moral movement (that moral people want to be a part of) depends on such sacrifices. GWWC pledges help; as this post points out, big spending probably doesn’t.
One shift that might help is thinking more carefully about who EA promotes as admirable, model, celebrity EAs. Communities are defined in important ways by their heroes and most prominent figures, who not only shape behaviour internally, but represent the community externally. Communities also have control over who these representatives are, to some degree: someone makes a choice over who will be the keynote speaker at EA conferences, for instance.
EA seems to allocate a lot of its prestige and attention to those it views as having exceptional intellectual or epistemic powers. When we select EA role models and representatives, we seem to optimise for demonstrated intellectual productivity. But our selections are not necessarily the people who have made the greatest personal altruistic sacrifices. Often, they’re researchers who live in relative luxury—even if they’ve taken a GWWC pledge. Perhaps we should make a more conscious effort to elevate the EA profile of people like those in MacFarquhar’s Strangers Drowning: people who have made exceptional sacrifices to make the world better, rather than people who have been most successful at producing EA-relevant intellectual output. Maybe the keynote speaker at the next EA conference should be someone who once undertook an effective hunger strike, say. (Maybe even regardless of whether they have heard of EA, or consider themselves EA.)
There’s an obvious reason to instead continue EA’s current role model selection strategy: having a talk from a really clever researcher is helpful for internal community epistemics. We want to grant speaking platforms to those who might be able to offer the most valuable information or best thought-through view. And it’s valuable for the external reputation of our community epistemics to have such people be the face of EA. We also don’t want to promote the idea that the size of one’s sacrifice is what ultimately matters.
But there are internal and external reasons to choose a role model based on the degree of inspiring altruistic sacrifice that person has made, too. Just as Will MacAskill can make me a little more informed, or guide my thinking in a slightly better direction, an inspiring story of personal sacrifice can make me a little more dedicated, a little more willing to work hard and sacrifice to make the world better. And externally, such a role model signals community focus on altruistic commitment.
My low-confidence guess is that the optimum allocation of prestige still gives most EA attention and admiration to those with greatest demonstrated intellectual or epistemic power—but not all. Those who’ve demonstrated acts of moral sacrifice should be held up as exemplars too, especially in external-facing contexts.
This is a very interesting point that, for me, reinforces the importance of keeping effective giving prominent in EA. It is both a good thing in itself, and also a defence against accusations of self-serving wastefulness, if a lot of people in the community are voluntarily sacrificing some portion of their income (with the usual caveats about whether you have actual disposable income).
GWWC, OFTW etc. may be doing EA an increasing favour by enlisting a decent proportion of the community to be altruistic.
It’s also noticeable that giving seems to be least popular with longtermists, who also seem to be doing the most lavish spending.
Many people prominent in EA still donate very large percentages: Julia Wise (featured in Strangers Drowning) and Jeff Kaufman give 50%, Will MacAskill at least 50%, and probably the same for Peter Singer and Toby Ord.
I was at an EA party this year where there was definitely an overspend of hundreds of pounds of EA money on food which was mostly wasted. As someone who was there, I can say that at the time this was very clearly avoidable.
It remains true that this money could have changed lives if donated to EA charities instead (or even used less wastefully towards EA community building!) and I think we should view things like this as a serious community failure which we want to avoid repeating.
At the time, I felt extremely uncomfortable / disappointed with the way the money was used.
I think if this happened very early into my time affiliated with EA, it would have made me a lot less likely to stay involved—the optics were literally “rich kids who claim to be improving the world in the best way possible and tell everyone to donate lots of money to poor people are wasting hundreds of pounds on food that they were obviously never going to eat”.
I think this happened because the flow of money into EA has made the obligations to optimise cost-efficiency and to think counterfactually seem a lot weaker to many EAs. I don’t think the obligations are any weaker than they were—we should just have a slightly lower cost effectiveness bar for funding things than before.
I had exactly the same thought in an identical-sounding situation. I felt incredibly uncomfortable, and someone at the party pointed out to me that these kinds of spending habits really alienate young EAs from less privileged backgrounds who aren’t used to ordering pricey food deliveries whenever they feel like it.
I think that it is worth separating out two different potential problems here.
1. It is bad that we wasted money that could have directly helped people.
2. It is bad that we alienated people by spending money.
I am much more sympathetic to (2) than (1).
Maybe it depends on the cause area but the price I’m willing to pay to attract/retain people who can work on meta/longtermist things is just so high that it doesn’t seem worth factoring in things like a few hundred pounds wasted on food.
I think if we value longtermist/meta community building extremely highly, that’s actually a strong reason in favour of placing lots of value on that couple of hundred pounds—in this kind of scenario, the counterfactual use of the money would largely be putting it to good use in longtermist / meta community building.
I think another framing here is that:
1) wasting hundreds of pounds on food is multiple orders of magnitude away from the biggest misallocation of money within EA community building,
2) all misallocation of money within EA community building is smaller than the misallocation caused by donations to less effective cause areas (for context, Open Phil has spent ~$200M on criminal justice reform, more than all of their EA CB spending to date), and
3) it’s pretty plausible that we burned much more utility through failing to donate/spend enough than through donating too much to wasteful things, so looking at the “visible” waste is ignoring the biggest source of resource misallocation.
For what it’s worth, even though I prioritize longtermist causes, reading
made me fairly uncomfortable, even though I don’t disagree with the substance of the comment, as well as
Yeah I’d mostly agree with this framing.
I don’t mean to imply that this party was one of the worst instances in EA of money being wasted, just that I was there, felt pretty uncomfortable, optics were particularly bad (compared to donating to something not very effective), and it made me concerned about how EAs are valuing cost-effectiveness and counterfactuals.
I agree that it’s important to not let the perfect be the enemy of the good, and it’d be bad to not criticize X just because X isn’t the single biggest issue in the movement. But on the other hand, some sense of scale is valuable (at least if we’re considering the object level of resource misallocation and not just/primarily optics).
Like if 30 EAs are at a party, and their time is conservatively valued at $100/h, the party is already burning >$50/minute, just as another example. Hopefully that time is worth it.
This is probably a bit of an aside, but I don’t think that is a valid way to argue about the value of time for people: It seems quite unlikely to me that instead of going to an EA party those people would actually have done productive work with a value of $100/h. You only have so many hours that you can actually do productive work and the counterfactual of going to this party would more likely be those people going to a (non-EA) party, going for dinner with friends, spending time with family, relaxing, etc than actually doing productive work.
Even free time has value: maybe people would by default talk about work in their free time, or relax in a more optimal way than partying, thus making them more productive. So a suboptimal party can still waste lots of value in ways other than taking hours away from work. Given this, there are many people whose free time should be valued at >$100/h.
Fair point, that’s a reasonable callout. I think the elasticity here is likely between 0 and 1, so really you should apply some discount; say, maybe 30% of the counterfactual is productive work time, for example? So we get to >$30/h per person and >$15/min for the party in the above Fermi estimate (written out explicitly below).
(As an aside, at least for me, I don’t find EA parties particularly relaxing, except relatively small ones where I already know almost everybody)
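For concreteness, here are the two versions of the Fermi estimate from this thread, written out with the illustrative figures used above (30 attendees, $100/hour, and an assumed 30% counterfactual-work discount):

```python
# Fermi estimate of the time cost of a party, using the thread's illustrative numbers.
attendees = 30
value_per_hour = 100   # USD per attendee-hour, the conservative figure assumed above
elasticity = 0.3       # assumed share of party time that displaces productive work

naive_cost_per_minute = attendees * value_per_hour / 60          # = $50/minute
discounted_cost_per_minute = naive_cost_per_minute * elasticity  # = $15/minute

print(f"Naive estimate: ${naive_cost_per_minute:.0f}/minute")
print(f"With the {elasticity:.0%} discount: ${discounted_cost_per_minute:.0f}/minute")
```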
Also with regards to longtermist stuff in particular, I think there’s a risk of falling into “the value of x-risk prevention is basically infinite, so the expected value of any action taken to try and reduce x-risk is also +infinity” reasoning.
I think this kind of reasoning risks obscuring differences in cost-effectiveness between x-risk mitigation initiatives which do exist and which we should take seriously because of other counterfactual uses of the money and because we don’t have unlimited resources.
(There’s a chance I’m badly rephrasing complicated philosophy debates around fanaticism, Pascal’s mugging, etc. here, but I’m not sure.)
I agree with you that this is clearly dumb! I don’t think calebp is making that mistake in the comment above however.
Apologies if I misinterpreted calebp’s comment, but I would paraphrase it as “the expected value of a longtermist EA community building event is infinite, and remains infinite with £200 being wasted on uneaten food, so we shouldn’t worry about the lost expected value from overspending on food by £200.”
I think that is a pretty uncharitable view.
I would say that it’s obviously not viewed as “infinite”, but orders of magnitude higher than £200. I’m sure calebp and most members of the community would definitely worry at £200,000 of wasted food.
I don’t think this is right, because there aren’t good mechanisms to convert money into utility. I don’t think there are reasonable counterfactuals to this money that aren’t already maxed out.
That said, if you can point to some actions in LT community building that should get a few hundred pounds, aren’t happening due to a lack of money, and seem positive in EV, I’d be happy to fund them (in a personal capacity).
I think more money to AMF / GiveDirectly/ StrongMinds are pretty good mechanisms to convert money into utility.
I also think it’s very difficult for counterfactuals to become maxed out, especially in any form of community building.
One concrete action—pay a random university student in London who might not be into EA but could do with the money to organise a dinner event and invite EAs interested in AI safety to discuss AI safety. I think this kind of thing has very high EV, and these kinds of things seem very difficult to max out (until we reach a point where, say, there are multiple dinners every day in London to discuss AI safety).
I think one cool thing about some aspects of community building is that they can only ever be constrained by funding, because it seems pretty easy to pay anyone, including people who don’t care about EA, to do the work.
Re the AI Safety dinners—it seems like a cool project could just be hiring someone full-time to coordinate and facilitate such dinners: inviting people and grouping them, logistics, suggesting structures for discussion, inviting special guests, etc. Is this something that’s being worked on? Or is anyone interested in doing it?
Wondering if there could be tie-in with the AGI Safety Fundamentals course. e.g. the first step is inviting a broad range of people (~1000-10000) to a dinner event (that is held at multiple - ~100? - locations around the world within a week). Then those who are interested can sign up for the course (~1000).
I meant from a LT worldview.
Have you tried this? I wouldn’t predict this going very well. I also haven’t heard of any community builders doing this (but I of course don’t know all the community builders).
I agree that this kind of dinner could be a good use of funding, but the specific scenario you described isn’t obviously positive EV (at least to me). I’d worry about poorly communicating EA, and about low quality of conversation due to the average attendee not being very thoughtful (if the attendees are thoughtful, then it is probably worth more CB time).
You also need to worry about free dinners making us look weird (like in the OP). I think that promoting/inviting people that might make the event go well is going to require a CB as opposed to just a random person. Alternatively, the crux could be that we actually do have similar predictions of how the event would go and have different views on how valuable the event is at some level of quality.
This is really far from my model of LT community building, I would love it if you were right though!
Yeah it’s hard to tell whether we disagree on the value of the same quality of conversation or on what the expected quality of conversation is.
Just to clarify though, I meant inviting people who are both already into EA and already into AI Safety, so there wouldn’t be a need to communicate EA to anyone.
I also don’t actually know if anyone has tried something like this—I think it would be a good thing to try out.
To me, the most important issue that this (and other comments here) raises is that, as a community, we don’t yet have a good model of how an altruist who (rationally/altruistically) places a very high value on their time should actually act. Or, for that matter, how they shouldn’t.
I realize the discussion here is broader than this specific case, but for this specific case, couldn’t people have just taken the extra food home so it would not go to waste?
Actually, yes that would have made a lot of sense, not sure why this didn’t happen.
Thanks for this clear write-up; like many others, I definitely share some of your worries. I liked that you wrote that the extra influx of money could make community-building positions accessible to people from different socioeconomic backgrounds, since this point seems to be a bit neglected in EA discussions.
I think it is true for many other impactful career paths that decent wages and/or some financial security (e.g. smoothing career transitions with stipends) could help to widen the pool of potential applicants, e.g. to more people from less fortunate socioeconomic backgrounds. Don’t forget that many people in the lower and lower-middle income classes are raised with the idea that it is important to take care of your own financial security. I have plenty of anecdotes from people in that group who didn’t pursue an EA career in the past, because the wage gap and the worries about financial insecurity were just too large. I see multiple advantages coming from widening the pool to people from lower / lower-middle socioeconomic classes:
Given that there is also a lot of talent in lower / lower middle socioeconomic classes, you will finally be able to attract more of them. This will increase the overall talent level in the community.
It could make the EA community less “elitist”, which has many instrumental advantages as well, e.g. on the public perception. In my collaborations with third parties outside of the EA movement, I often receive questions on TFG’s / EA’s stance on Diversity, Equity, and Inclusion. Having a less elitist movement would make it easier to collaborate with parties outside of the movement.
Diversity in terms of backgrounds could lead to a larger diversity of thought and this could potentially help us find new cause areas or improve our understanding of causes like poverty.
Adding on: Increasing EA spending in certain areas could certainly support diversity, but it could have the opposite effect elsewhere.
I’m concerned that focusing community-building efforts at elite universities only increases inequality. I’m guessing that university groups do much of the recruiting for all-expenses-paid activities. In practice, then, students at elite universities will benefit, while students at state schools and community colleges won’t even hear about these opportunities. So the current EA community-building system quite accurately selects for privileged students to give money to.
Curious about any work to change this pattern!
This is a great point. The good news is your concern is shared by CEA and others. It’s very exciting to see the work that Jessica McCurdy at CEA (and others) are doing to support the growth of EA groups at economically diverse R1 universities and smaller colleges, etc.
EAIF has also funded a small project to try and support groups at so-called “Public Ivies” in the U.S., with a special focus on public honors colleges that can contribute to socioeconomic diversity in EA. Feel free to DM if you’re interested in this broader opportunity area, whether in the context of North America / other OECD member countries—or in the context of other regions of the world!
Thanks for writing this! Especially agree with: “We should be careful with how we advertise EA funding. For example, we should avoid the framing of ‘people with money want to pay for you to do X’ and replace this with an explanation of why X matters a lot and why we don’t want anyone to be deterred from doing X if the costs are prohibitive.”
I’ve had a good experience with framing decisions around (reasonable) costs not getting in the way of high-impact work — not only from the perspective of optics, but also as a heuristic for where to draw boundaries (e.g. where to draw the line on what salaries to offer).
I think a lot of points in this post are very valid and concerning to me. I hope they will be taken seriously.
These points concern me too. When you say you hope they will be taken seriously, I’m unsure who you have in mind. Taken seriously by whom?
I guess mainly FTX, Open Philanthropy, EA Funds, and CEA. I’ve shared the article with relevant people in all of those.
Concrete example affecting me right now: this summer I’m considering internships in mental health, x-risk or global health cause prioritisation, and I’m also considering just doing a bunch of Coursera courses and working on a start up.
I think ideally I would be choosing entirely based on what offers more career capital / is more impactful, but it’s difficult not to be influenced by the fact that one of the internships would pay me £11k more than the other 3.
You should keep in mind that high-earning positions enable a large amount of donations! Money is a lot more flexible in which cause you can deploy it to. In light of current salaries, one could even work on x-risks as a global poverty EtG strategy.
You should be influenced by that! It is evidence for donors thinking that org is more important, and that org thinking you are more important. Prices transmit valuable information.
I think for difficult questions it is helpful to form both an inside view (what do I think) and an outside view (what does everyone else think). Pay is an indicator of the outside view. In an altruistic market how good an indicator it is depends on how much you trust a few big grantmakers to be making good decisions.
Ok, yes, but I think it’s a little more complicated than that, or we would all be working at Goldman or Google, who are also able to deploy altruistic narratives.
Yes, the scope is “Orgs whose donors you respect for their capital allocation.” Goldman doesn’t have donors at all.
Yes you’re right (Goldman was bad/silly to bring up).
But it seems good to make the main point:
It’s possible and even ideal for salary to reflect impact.
However, people have used outside (market) salaries to justify differential salaries within EA. These justifications are extremely convincing (even if it is self-serving to write this).
(I don’t think you did this, but) given the above justification, also suggesting that these salary norms are signals of impact risks leaning too hard on them. This might come off as slippery or wrong in certain situations.
[Edit: The original version of this comment offered an idea that, as Mauricio flagged below, could be inconsistent with U.S. antitrust law. Thanks, Mauricio, for flagging my mistake. I retract the comment.]
Hm, this might violate US antitrust law?
I wonder whether the exception for organized labor might apply in this context?
Conspiring to suppress wages is clearly off-limits. But because the intention is to raise wages to a uniform base that makes all high-impact work similarly attractive, rather than to suppress wages, I’d be interested to explore whether workers could pursue the strategy above by forming a union and bargaining collectively with employers for a consistent contract.
(I feel very uncertain of the feasibility of this idea. Before pursuing it any further, I think it would be important to learn more about constraints on collective bargaining with multiple employers for similar contracts, as well as any limits on funders’ ability to encourage grantees to hire members of a union.)
Maybe, does this apply to non-profits?
Thank you very much for this post. I thought it was well-written and that the topic may be important, especially when it comes to epistemics.
I want to echo the comments that cost-effectiveness should still be considered. I have noticed people (especially Bay Area longtermists) acting like almost anything that saves time or is at all connected to longtermism is a good use of money. As a result, money gets wasted because cheaper ways of creating the same impact are missed. For example, one time an EA offered to pay $140 of EA money (I think) for me for two long Uber rides so that we could meet up, since there wasn’t a fast public transport link. The conversation turned out to be a 30-minute data-gathering task with set questions that worked fine when we did it on Zoom instead.
Something can have a very high value but a low price. I would pay a lot for potable liquid if I had to, but thanks to tap water that’s not required, so I would be foolish to do so. In the example above, even if the value of the data were $140, the price of getting it was lower than that. After taking into account the value of time spent finding cheaper alternatives, EAs should capture the surplus whenever possible.
As a default, I would like to see people doing a quick internal back-of-the-envelope calculation and scan for cheaper alternatives, which could take a minute or five. Not only do I think this is cost-effective; I think it helps with any issues of optics and alienation as well, because you only do crazy-expensive-looking things when there’s not an obvious cheaper alternative.
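To make this concrete, here is a minimal sketch of what such a one-minute comparison might look like for the Uber example above. The specific figures (the value of an hour of time, the hours spent in transit, the scheduling overhead for a call) are assumptions of mine, not numbers from the comment:

```python
# Minimal BOTEC sketch (illustrative numbers only; none of these figures come from the comment above).
# Compare the $140 Uber option against the Zoom alternative for a 30-minute data-gathering call.

HOURLY_VALUE_OF_TIME = 50.0   # assumed value of an hour of time, in USD
UBER_CASH_COST = 140.0        # round-trip fares from the example
UBER_TRAVEL_HOURS = 2.0       # assumed time spent in transit
ZOOM_CASH_COST = 0.0
ZOOM_OVERHEAD_HOURS = 0.1     # assumed extra scheduling/setup overhead

def total_cost(cash: float, hours: float) -> float:
    """Cash outlay plus the monetised value of the time spent."""
    return cash + hours * HOURLY_VALUE_OF_TIME

uber = total_cost(UBER_CASH_COST, UBER_TRAVEL_HOURS)
zoom = total_cost(ZOOM_CASH_COST, ZOOM_OVERHEAD_HOURS)
print(f"Uber: ${uber:.0f}  Zoom: ${zoom:.0f}  surplus from taking the cheaper option: ${uber - zoom:.0f}")
```

Even with generous assumptions about the value of time, the cheaper option wins comfortably here, which is the point: the value of the data can be high while the price of getting it stays low.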
It would also be nice to have a megathread of cheaper alternatives to common expenditures.
I’m worried that in some cases grantmakers and grant recipients are friends who actively socialize with each other, and that this might corrupt the grantmaking process.
Being friends with someone is also a great way of learning about their capabilities, motivations and reliability, so I think it could be rational for rich funders to be giving grants to their friends moreso than strangers.
I disagree with you here. I think being friends with someone makes you quite likely to overestimate their capabilities, reliability, etc. If there’s psychology research available on how we evaluate people we know vs. strangers, I’d love to read it.
There are two opposing arguments: 1) you get more information about your friends than you get about strangers, and 2) you are more likely to be biased in favour of your friends.
Personally, I think it would be very hard to vet potential funding prospects over just having a few talks, and the fact that I’ve “vetted” my friends over several years is a wealth of information that I would be foolish to ignore.
Our intuitions on this may diverge based on how likely we think it is that we’ve acquired exceptional friends. If you’re imagining childhood friends or college buddies, then I see why you would be skeptical. If on the other hand you’re imagining the friends you’ve acquired from activities that you think only exceptional people would engage in, then that changes things.
I think for funding a project, most of the important and relevant information about a person who might run the project can be obtained from a detailed CV.
I think most of the information that a funder could obtain about a friend which they couldn’t also get from the friend’s CV is their impression of difficult-to-accurately-evaluate things like personality traits. I place very little value on a funder’s evaluation of these things, because they are inherently difficult to evaluate anyway and I expect the evaluation to be too heavily biased by their liking for their friend.
Perhaps we disagree on the difficulty of evaluating personality traits, but I think we probably disagree on the extent to which liking someone as a friend is likely to bias your views on them.
My view has long been that the bias is likely to be so large that funding applications should include CVs but not the names of people. I think many EAs feel like systems like these overvalue credentials, but that could easily be gotten round by excluding university names and focusing CVs more on ‘track record of running cool projects’.
Wait, what? The predictive validity of CVs is minimal for most jobs; one might naively guess that they ought to be even less predictive for funding entrepreneurial projects than for jobs.
Why do you think companies rely on referrals more than on CVs?
There are lots of ways to accurately predict a job applicant’s future success. See the meta-analysis linked below, which finds general mental ability tests, work trials, and structured interviews all to be more predictive of future overall job performance than unstructured interviews, peer ratings, or reference checks.
I’m not a grantmaker and there are certainly benefits to informal networking-based grants, but on the whole I wish EA grantmaking relied less on social connections to grantmakers and more on these kinds of objective evaluations.
Meta-analysis (>6000 citations): https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.172.1733&rep=rep1&type=pdf
Do you mean like people referring potential employees to them or the references of job applicants? I wasn’t aware of what companies rely on more in recruitment.
But yeah, my model of recruitment was very evidence-free and mostly based on my limited experience recruiting people for things. It turns out my model of what’s most useful was basically the opposite of what some of the evidence says (https://www.talentlens.com/content/dam/school/global/Global-Talentlens/uk/AboutUs/Whitepapers/White-Paper-The-Science-Behind-Predicting-Job-Performance-at-Recruitment.pdf). I was very surprised to see peer ratings being so useful for predicting performance.
I meant people who work in a company referring future potential employees.
Very belated edit to add: if your workplace has a bunch of real work, someone needs to get things done on your team. If your teammates are slackers or generally incompetent, this means you need to pick up the slack. This could end up being pretty rough, so there’s a strong incentive to give honest referrals to future colleagues about their work performance (and not just charisma).
This is probably a very important distinction for those reading the above comments for the first time. “Referral” might be a better word so as to distinguish from “reference” letters written by past supervisors.
I’m not sure if this perspective is helpful but this issue reminds me of a somewhat analogous situation in the Financial Independence Retire Early (FIRE) movement. Originally the focus was on drastically limiting spending, increasing the savings rate to as high as possible, and retiring shockingly young. Then, as time passed some people realized they didn’t want to live in such austerity. Other people found that they could move things along faster by focusing on earning more, instead of spending less. Then there were people who didn’t really want to retire but more like get enough income to be comfortable and then downshift their lifestyles. There were folks who just focused on making as much money as possible and remained in the community even though they were just about getting rich. Then some people sort of stumbled into the movement having made a ton of money on cryptocurrency or Tesla options or whatever...they never really applied any of the principles but still retired early.
With all these changes in the demographics and mindsets of the community, I’ve noticed that the subjects discussed and the behavior encouraged have notably changed over the years. It does not look much like what I saw 15 years ago.
Part of the change I’ve seen is that people with different flavors in mind self-select to associate with others that are similar. /r/leanfire separates from /r/fatfire etc. I’m guessing that drift and fragmentation like this are very likely for any group/movement that gets big enough. I don’t know if it is a good or bad thing.
I think what you’re describing is drift due to size (well in this case of FIRE, it actually might be drift due to experiences/values/maturity but let’s say due to size). The FIRE movement is “wide”. Maybe more appropriately, a subreddit like r/antiwork or r/superstonk is “wide”.
These “wide” movements have a lot of people. They often have momentum and can coordinate. But it’s unclear what resources and actions they can take, beyond buying stock or something. Also, as you point out, they have the tendency to drift or break apart.
But EA can do something else, which is getting “tall”. $50B of funding is just the beginning and this money is the least interesting resource EA has. EA can accumulate other things of great value. I think it’s hard to write out exactly what these resources are (because it’s hard to know in advance or because I’m dumb) but they are probably related to institutions and talent. One example would be a powerful applied math group that solves ELK.
One implication is that this “tallness” requires and justifies a strong, virtuous Leviathan-like “Center” that shepherds and adds to these resources. One role of the “Center” is to prevent systemic misuse, which doesn’t look like stealing, but rather people being inside EA at some point, then leaving, taking away resources and not giving back (with the caveat that the departed resources could be used impactfully). The “Center” also needs to deal with other systemic issues, like entrenchment of individuals/entities or uncollegial subcommunities.
These resources also create dynamics that both prevent or increase drift/dilution. For example, these valuable EA resources aren’t just money, they are sticky to EA. So a robust trend is people being attracted by them and trying to learn EA values. This is good, but one issue is that you need to figure out this flow and integrate new leaders and people successfully.
Another important dynamic might be the constant upgrading of talent. Ideally, EA talent should get better and better. Or at least be more experienced and have greater faculty. This creates a tension between existing cultures/groups and the flow of talent.
Each of these dynamics puts pressure on and implies different roles for the “Center”. For example, upgrading of talent means there’s pressure for the “Center” to focus on virtue and governance and fostering people, instead of object level work, or even strategy to some degree.
But the point of this story is that purposeless drift and dilution aren’t inevitable, and in fact are controllable by a good Center. The point of this control is the “tallness” or resources to execute effective altruism.
A few thoughts on how we could mitigate some of these risks:
Have generous reimbursement policies at EA orgs but don’t pay exorbitant salaries.
I think most EAs should value their time higher and be willing to trade money for time, and in these cases, I think you can justify a business expense. I think this will help clarify which spending choices are meant to actually boost productivity and which are just for fun. To be clear, I think spending some fraction of your income on just “fun” things like vacations, concerts, and eating out is fine in moderation. But to me at least, the shallow pond thought experiment is still basically true and there is plenty of need left in the world, even with the current funding situation.
I think we systematically overestimate how much spending more on personal consumption will make us happy/productive. I know plenty of people in finance/consulting/tech who have convinced themselves that they “need” to spend hundreds of thousands on personal consumption every year. I’ve lived in NYC on <$50K after taxes and donating for 4 years and feel like I’ve been able to do basically everything I want to do.
Emphasize costly signals of altruism.
We should encourage people to take the GWWC pledge and go vegetarian/vegan because they’re probably good things to do on their merits and because they signal a commitment to making a sacrifice to help others.
Strong upvoted because of the clear distinction between productivity/business expenses and spending money for fun/personal consumption.
Consider the analogy with food production and food waste in relation to global hunger. We can grow enough food to feed the planet. Our ability to solve world hunger is not constrained by food production, but, in my understanding, by logistical issues involving waste, transportation, warfare, and governance problems.
Likewise, in EA, our ability to address the problems with which we are concerned may be increasingly unconstrained by funding. Instead, it’s bottlenecked by similar logistics problems: waste, governance, coordination within and between organizations, the challenges of vetting grants, finding talent, building new organizations, and, as you are pointing out, optics. Can’t blame lack of funding for your failures when you’re no longer bottlenecked by funding!
It’s important to understand that these optics and logistical problems are not a fluke, or the consequence of something we did wrong, but a natural consequence of growing to a certain size. It’s just the next set of problems for us to solve.
Going forward, I would advocate for basing perception issues on legible evidence. I have no problem with this post, which does a good job of furthering a meaningful conversation. I notice, however, that it’s full of uncited and possibly exaggerated opinion aggregation and anecdata:
“The influx of EA funding is brilliant news, but it has also left many EAs feeling uncomfortable.” is representing two blog posts, plus the mixed reactions in the comments, as the unified opinion of “many” EAs.
“I’ve heard critical comments about a range of spending decisions. Several people asked me whether it was really a good use of EA money to pay for my transatlantic flights for EAG. Others challenged whether EAs seriously claim that the most effective way to spend money is to send privileged university students to an AirBNB for the weekend. And that’s before they hear about the Bahamas visitor programme…”
“Anecdotally, I’ve spoken to several organisers who aren’t convinced of longtermism but default to following the money nevertheless. I’ve even heard (joking?) conversations about whether it’s worth ‘pretending’ to be EA for the free trip. ”
If this is the best we’ve got, then so be it—anecdata > no data! Not a criticism of you or your post. But I think it would be valuable to run a more careful formal survey to understand what insiders, newcomers, leadership, allies, and people outside the movement think.
More worrying about optics at this level of the evidential pyramid seems to me to risk creating an optics issue.
One way it can create an optics issue is by selectively amplifying a few casual comments on one side of an issue into a perceived social consensus.
Another way is by putting effort into identifying ways that bad-faith criticism could make more damaging, meritless attacks on EA organizations and programs, an infohazard of sorts.
A third way is by making newcomers in EA, who are disproportionately the sort of people who lack evidence of having personally done something effectively altruistic, doubt themselves and feel guilty for ways they’ve indulged themselves in conferences and free food, or thought about applying for jobs and grants.
I would welcome more focused attention on this issue, but I think that there’s a burden of epistemic rigor that falls on an analysis of optics issues facing the EA movement.
With the caveat that this is obviously flawed data because the sample is “people who came to an all-expenses-paid retreat,” I think it’s useful to provide some actual data Harvard EA collected at our spring retreat. I was slightly concerned that the spending would rub people the wrong way, so I included as one of our anonymous feedback questions, “How much did the spending of money at this retreat make you feel uncomfortable [on a scale of 1 to 10]?” All 18 survey answerers provided an answer. Mean: 3.1. Median: 3. Mode: 1. High: 9.
I think it’s also worth noting that in response to the first question, “What did you think of the retreat overall?”, nobody mentioned money, including the person who answered 9 (who said “Excellent arrangements, well thought out, meticulous planning”). On the question “Imagine you’re on the team planning the next retreat, and it’s the first meeting. Fill in the blank: ‘One thing I think we could improve from the last retreat is ____’”, nobody volunteered spending less money; several suggestions involved adding things that would cost more money, including the person who answered 9, who suggested adding daily rapid tests. The question “Did participating in this retreat make you feel more or less like you want to be part of the EA community?” received mean 8.3, median 9, including a 9 from the person who felt most uncomfortable about the spending.
I concluded from this survey that, again, with the caveats for selection bias, the spending was not alienating people at the retreat, and especially not alienating enough to significantly affect their engagement with EA.
apologies if this was obvious from the responses in some other way, but did you consider that the person who gave a 9 might have had the scale backwards, i.e. been thinking of 1 as the maximally uncomfortable score?
Hmm, this does seem possible, maybe more than 50% likely. Reasons to think it might not be the case are that I know this person was fairly new to EA and not a longtermist, and that somebody asked a clarifying question about this question which I think I answered in a clarifying way, but may not have clarified the direction of the scale. I don’t know!
Acknowledging that important caveat, I am very pleased to have this counterbalancing data available. I hope that we can continue to gather more of it and get a better sense of how the EA movement and its social surroundings think about these questions over time. Thank you for collecting it.
Thanks for writing this post, this is an area I’ve also sometimes felt concerned about so it’s great to see some serious discussion.
A related point that I haven’t seen called out explicitly is that monetary costs are often correlated with other more significant, but less visible, costs such as staff time. While I think the substantial longtermist funding overhang really does mean we should spend more money, I think it’s still very important that we scrutinize where that money is being spent. One example that I’ve seen crop up a few times is retreats or other events being organized at very short notice (e.g. less than two weeks). In most of these cases there’s not been a clear reason why it needs to happen right now, and can’t wait a month or so. There’s a monetary cost to doing things last minute (e.g. more expensive flights and hotel rooms) but the biggest cost is that the event will be less effective than if the organizers and attendees had more time to plan for it.
More generally I’m concerned that too much funding can have a detrimental effect on organisational culture. It’s often possible to make a problem temporarily go away just by throwing money at it. Sometimes that’s the right call (focus on core competencies), but sometimes it’s better off fixing the structural problem, before an organisation scales and it gets baked in. Anecdotally it seems like many of the world’s most successful companies do try to make frugality part of their culture, e.g. it’s one of Amazon’s leadership principles.
In general, being inefficient at a small scale can still end up being very impactful if you work on the right problem. But I think to make a serious dent on the world’s problems, we’re likely going to need some mega-projects, spending billions of dollars with large headcount. Inefficiency at that scale is likely to result in project failure: oversight and incentives only get harder. So it seems critical that we continue to develop the ability in EA to execute on projects efficiently, even if in the short-term we might achieve more by neglecting that.
I do feel a bit confused about what to do in practice to address these problems, and would love to see more thinking on it. For individual decisions, I’ve found figuring out what my time (in some context) is worth and sticking to it for time-money tradeoffs is helpful. In general I’d be suspicious if someone is always choosing to spend money when it saves time, or vice-versa. For funding decisions, these concerns are one of the reasons I lean towards keeping the bar for funding relatively high even if that means we can’t immediately deploy funding. I also support vetting people carefully to avoid incentivizing people pretending to be longtermists (or just having very bad epistemics).
Google, by contrast, is notoriously the opposite—for example emphasizing just trying lots of crazy, big, ambitious, expensive bets (e.g. their “10x” philosophy). Also see how Google talked about frugality in 2011.
Making bets on new ambitious projects doesn’t seem necessarily at odds with frugality: you can still execute on them in a lean way, some things just really do take a big CapEx. Granted whether Google or any major tech company really does this is debatable, but I do think they tend to at least try to instill it, even if there is some inefficiency e.g. due to principal-agent problems.
My thoughts on this:
I think because of the flow of money into EA, it feels like some people have updated towards cost-effectiveness and counterfactual reasoning being less important than before.
I disagree with that view—I think that cost-effectiveness and counterfactual reasoning are exactly as important as they were before, the only change should be that our cost-effectiveness bar for funding things should be slightly lower. It remains true that small amounts of money from people in rich countries can dramatically raise someone’s income via GiveDirectly, save a life through AMF or improve someone’s subjective wellbeing via StrongMinds, so the obligation to be cost-effective remains very strong.
I think not enough effort seems to go into estimating and optimising the cost-effectiveness of the community building side of EA, probably in part because this is difficult to do, highly uncertain and thus prone to motivated reasoning.
But I think much more effort should go into estimating and optimising the cost-effectiveness of EA community building anyway. One concrete example of what I’d like to see: do we think we overspent or underspent on EAGxs and EAGs this year?
This post clearly articulates a lot of the related thoughts I’ve been having and discussing with other organizers; well done. I will add my quickly dashed-off thoughts, coming in particular from the perspective of an EA group organizer:
1. The time/money trade-off is real, particularly for mostly volunteer-led groups where volunteer capacity is our main bottleneck. Nonetheless, in my view, being cognizant of trade-offs when allocating resources is core to EA, and it is a real loss when we just vaguely gesture at the time/money trade-off and spend money without really thinking deeply about its best use. I advocate taking a rule-utilitarian approach to this: even if, in any given situation, thinking hard about whether spending funds on something is the best use of those funds (even within a narrower framework like a group’s overall goals) takes more time than it is “worth”, it is still worth doing as a rule. This also reinforces the norms of talking explicitly about trade-offs, cause prioritization, and thinking strategically.
2. This is anecdotal of course, but I have directly seen people express discomfort when our group spends money on, e.g., paying for food and drinks or renting a “nice” space for an event. Attendees have directly said in feedback following these events that they are uncomfortable spending this money, and it doesn’t seem to align with our EA values. Is this discomfort enough to have tangible consequences like they stop being engaged? Unclear. Moreover, I think it is quite possible that having a “nicer” event may still be justified by attracting more people, more types of people, and/or having the people there have a better time. I’ve advocated collecting data about, for instance, how many people come to similar events, one that is catered and one that is not. Such data collection of course also takes time and energy. In general I am pretty skeptical that the marginal dollar spent on food and particularly alcohol at an event is doing much good but without any data this view is very loosely held.
3. FWIW, Matt Yglesias has expressed something in this vein on the 80k podcast:
Because of Evan’s comment, I think that the signaling consideration here is another example of the following pattern:
Someone suggests we stop (or limit) doing X because of what we might signal by doing X, even though we think X is correct. But this person is somewhat blind to the negative signaling effects of not living up to our own stated ideals (i.e. having integrity). It turns out that some more rationalist-type people report that they would be put off by this lack of honesty and integrity (speculation: perhaps because these types have an automatic norm of honesty).
The other primary example of this I can think of is with veganism and its signaling benefits (and usually unrecognized costs).
A solution is that when you find yourself saying “X will put off audience Y” to ask yourself “but what audience does X help attract, and who is put off by my alternative to X?”
Warren Buffett called his private jet ‘The Indefensible’ — then renamed it ‘The Indispensable’ after realizing it was worth the money.
Source
Following the academic research closely, as EAs often do, produces many perspectives that are surprising to traditional activists. I’m a student at University of California Davis. Here, my frugality is essential to getting my peers to take my perspectives on effectiveness seriously. If it weren’t for the frugality, they would dismiss me as not altruistic because I’m a moderate Democrat instead of a socialist. I’m frugal because I believe it’s the right thing to do (for me at least), not because of the optics. I don’t know what the best answer is overall, but I believe we should be particularly cautious about abandoning frugality in very left-wing environments. Perhaps very different levels of frugality will be best in different communities.
Even before a cost-benefit analysis, I’d like to see an ordinal ranking of priorities. For organizations like the CEA, what would they do with a 20% budget increase? What would they cut if they had to reduce their budget by 20%? Same thing for specific events, like EAGs. For a student campus club, what would they do with $500 in funding? $2,000? $10,000? I think this type of analysis would be helpful for determining if some of the spending that appears more frivolous is actually the least important.
My suggestion would be that more people interested in Effective Altruism infrastructure donate to Giving What We Can instead of the E.A. Infrastructure Fund or CEA Community Building Fund. A community organized around effective giving is 1) better for optics and 2) better for us; 3) anecdotally, I was inducted into E.A. through global poverty, and then later got into longtermism and animal welfare by extension. Without good infrastructure and a strong culture of effective giving, E.A. will cease to be an excited and exciting (and growing) community working to solve the world’s biggest problems, and will become simply a few eccentric billionaires’ weird AI-risk pet project.
FWIW, I think it’d be pretty hard (practically and emotionally) to fake a project plan that EA funders would be willing to throw money at. So my prior is that cheating is rare and an acceptable cost to being a high-risk funder. EA is not about minimising crime, it’s about maximising impact, and before we crack down on funding we should check our motivations. I don’t want anyone to change their high-risk strategy based on hearsay, but I do want our top funders to be on the lookout so that they might catch a possible problem before it becomes rampant.
I like the culture-aligning suggestions for other reasons, though. I think the long-term future will benefit from the EA community remaining aligned with actually caring about people.
With Asana’s stock down 82% in the past six months, Meta down 43%, and SBF’s net worth cut in half in the past month, maybe the bigger worry should be a period of austerity and cutbacks?
I’m not sure if there’s any data on this, but I think EAs do actually tend to come from well-off backgrounds.
Because of that, I think a share (I’d guess like 15%?) of EA funding for career building for students and recent graduates doesn’t actually have counterfactual impact and just provides funding for people to do stuff which they would have spent their own money on anyway. More money in EA will mean more money being used in this way.
Obviously, this wasted money is bad, because it’s still important for us to be cost-effective and the counterfactual use is still AMF.
So I think we’d benefit from a strong norm against using EA funding for career building activities which people would have spent their own money on anyway.
I don’t think we should retire the “do you think this would be a better use of money than giving it to AMF?” type thinking, we should keep it alongside “actually, yes, flow through effects could mean that this is a better use of money than giving it to AMF”.
There’s also probably a case for experimenting with means-testing for grants, which a lot of social initiatives use to focus their money on people who need it the most, which improves counterfactual cost-effectiveness.
Current (highly engaged) EAs mostly coming from well-off backgrounds can also be a good argument in favor of more funding for career building for students and recent graduates though.
EAs from less-affluent backgrounds are those who benefit the most from career building and exploration funding, as they are the people most likely to face financial/other kinds of bottlenecks that prevent them from doing impactful stuff.
Reducing career building funding will just reinforce the trend of only well-off EAs, who can afford to take risks, staying engaged, while EAs from less affluent backgrounds are more likely to drift out of the community and less likely to take riskier but more impactful career paths.
As you say, the solution would be to effectively assess whether career building has counterfactual impact and ideally even fine-tuning the funding amount to specific circumstances, although that probably could lead to the development of weird and undesirable incentives on the applicants’ side.
Yes, I agree with you with regard to the amount of funding. One EA initiative I’d actually like to see is funding EA students from LMICs to go to the world’s best universities.
And yes, my idea is more about fine-tuning the funding to go to people where the counterfactual impact is higher (another plus would be that less EA money is used up by wealthier people, freeing it up for less wealthy people).
I think means-testing is fairly widely used (at least in the UK). I use it myself to selectively distribute products from my social enterprise towards kids from lower-income backgrounds. I’m fairly confident that the downsides of means-testing (weird incentives, people trying to “game” the system, and the indignity it makes some people feel) generally don’t outweigh the benefits of better targeting of funding. And in the EA context, I think the benefits of better-targeted funding will be larger than usual because of the cost-effectiveness with which the saved EA money will be spent.
This is a great post, and I’m glad these points are being raised. I share a lot of the same concerns (basically, what happens to EA long term when it’s just a good deal to join it?).
A big and small personal win from these changes in funding:
I decided to launch a magazine reporting on what matters in the long-term in large part because of the change in funding situation and related calls for more ambition. I had the idea for doing this more than 3 years ago, but didn’t pursue it. (We’re aiming to launch in Mar 2023).
In August, I quit my job at GiveDirectly to pursue freelance journalism full time, and planned to make basically no money for possibly 1-2 years. I cut a lot of costs to maximize my runway. A few months later, I got a job with an EA org that paid better than any job I had in the past. Now my time was scarce and money was not. I bought a free-standing dishwasher for ~$1000, which bought back ~45 minutes a day. I think this decision, and other smaller ones like it, were very good.
But it’s easy to get into self-serving territory where you value your time so highly that you can justify almost any expense (or don’t think of cheaper ways to meet the same goals). This can also move us into territory where, to do ostensibly altruistic work, we don’t give anything up, and, in fact, argue that others should give things to us.
This feels fundamentally different from the movement that attracted me 5 years ago (though the reasoning is very consistent, and may well be right).
Like others, I really appreciate these thoughts, and it resonates with me quite a lot. At this point, I think the biggest potential failure mode for EA is too much drift in this direction. I think the “EA needs megaprojects” thing has generated a view that the more we spend, the better, which we need to temper. Given all the resources, there’s a good chance EA is around for a while and quite large and powerful. We need to make sure we put these tools to good use and retain the right values.
It’s interesting here how far this is from the original version of EA and its criticisms; e.g. that EA was an unrealistic standard that involved sacrificing one’s identity and sense of companionship for an ascetic universalism.
I think the old perception is likely still more common, but it’s probably a matter of time (which means there’s likely still time to change it). And I think you described the tensions brilliantly.
Congrats on having the most upvoted EA Forum post of all time!
Free food and free conferences are things that are somewhat standard among various non-EA university groups. It’s easy to object to whether they’re an effective use of money, but I don’t think they’re excessive except under the EA lens of maximizing cost-effectiveness. I think if we reframe EA universities groups as being about empowering students to tackle pressing global issues through their careers, and avoid mentioning effective donations and free food in the same breath, then it’s less confusing why there is free stuff being offered. (Besides apparently being more appealing to students, I also genuinely think high-impact careers should be the focus of EA university groups.)
I’m in favor of making EA events and accommodation feel less fancy.
There are other expenses that I’d be more concerned about from an optics perspective than free food and conferences.
It’s worth noting that these perks are available for new EA groups in general, not even particularly longtermist EA groups. That said, I think there are plenty of additional perks to being a longtermist (career advising from 80,000 Hours, grants from the Long-Term Future Fund or the FTX Future Fund to work on projects, etc.) that you might want to be one even if you’re intellectually unsure about it. I think another incentive pushing university organizers in favor of a longtermist direction is: it doesn’t make sense to be spending this much money on free food and conferences from a neartermist perspective, at least in my opinion.
Maybe I missed this in a previous comment (or even the text itself, I just ctrl+f’ed it after skimming), but one thing I think it could be worth spending more on is better working conditions (I think several EA orgs already do this well, but I would be surprised if there are no “laggards”). Think staffing projects properly so there is no burn-out, paid parental leave for both parents, childcare facilities near bigger offices, properly paid internships, etc. Burn-out plagues the “making the world better” industry, and I think we can attract a lot of talent who might be skeptical of expensive retreats but who place much more value on being able to have good mental health and invest in their families. And do all this with a global hat on, so that in geographies with e.g. limited statutory holidays, paid holiday is increased to 3-6 weeks each year no matter your seniority. Even here in Sweden we would benefit from well-staffed projects/organizations (despite having decent holiday policies etc. by law).
A lot of good points here.
A few thoughts on the benefits of a frugal community:
norms of frugality can help people avoid some of the consumeristic rat race of broader society. I don’t want EAs caught up in “keeping up with the Joneses.” I want EAs keeping up with good ideas and good actions.
I think we want a community where someone who uses careful reasoning to take an impactful role for $60K/yr feels just as welcome in EA as someone who uses careful reasoning to take an impactful role for $160k/yr.
Not sure if this is in any way a valid perspective of looking at it:
I wonder how the big spending looks in the perspective of a small donor. Say, a person with a median income within a rich country who gives a 1-10 percent of their salary away.
I used to “earn to give” with an after-tax salary of 11 euros/hour. That’s a lot compared to the global average! It was enough to donate >10 percent. But an hour of my past self’s work could fund maybe a few minutes (?) of a researcher’s time (I don’t know what EA researchers earn), and it might have been worth it.
It makes me think of this comment. Again, not sure if it’s a valid point.
Thank you for writing this post; I know these take a lot of time and I think this was a really valuable contribution to the discourse/resonated strongly with me.
I find it helpful to get clearer about who the audience is in any given circumstance, what they most want/value, and how money might help/hurt in reaching them. When you have a lot of money, it’s tempting to use it as an incentive without noticing it’s not what your audience actually most values. (And it creates the danger of attracting the audience that does most value money, which we obviously don’t want.)
For example, I think two critical audiences include both ‘talented people who really want to do good’ and ‘talented people who mostly just like solving hard problems.’ We’re competing with different kinds of entities/institutions in each case, but I think money is rarely the operative thing for the individuals I’d most like to see involved in EA.
For e.g. young people who really want to do good:
Many of them already know they could go work in finance/consulting and get lots of money and travel and perks, but choose not to. I was in this situation, and all the free stuff made me assume that there was nothing intrinsically motivating/valuable about the work, since they had to dangle so many extrinsic rewards to get me to do it. EA is competing for those people with other social movements/nonprofits, and I suspect the more EA starts looking like the finance career option in terms of extrinsic rewards, the more those people might end up dismissing it/feel like they’re being “bought.”
For people who really like solving hard problems:
I’m thinking here of people I know who are extremely smart and have chosen to do ethically neutral to dubious work because they got nerd-sniped/enjoyed the magnitude of the challenge/felt like it was a good fit for their brain/etc.
My sense is that money was a factor for them but more as a neutral indicator of value/signal that they were working on something hard and valuable than because of a desire to live a specific lifestyle (many still live with roommates/don’t spend much). I think the best way to get these people is emphasizing that we have even harder and more interesting problems they could be solving (though offering reasonably comparable salaries so that the choice to switch is less disruptive also seems good).
I think the point has been made in a few places that more money means lower barrier to entry and is an opportunity to reduce elitism in EA and I just wanted to add some nuance:
I think deploying money to literally make participation in the movement possible for more people is great (i.e. offering good salaries/healthcare/scholarships to people who would be barred from an event by finances).
On the other hand, I think excessive perks/fancy events etc. are likely to be especially alienating for people who have close family members struggling financially (this aligns with my own experience), so I worry that spending of this kind may actually make the movement feel less welcoming to people from a different socioeconomic background instead of more.
You point out it’s difficult to control for “unilateralism”. There isn’t just one major funder but several, and each has many different areas and projects.
Things that are more manageable and visible are “institutions” and the culture around leadership:
I think there is a genuine culture of good leadership (“servant leadership”?) in older and more established EA institutions/funders
A lot of people right now in leadership and junior leadership positions seem to have given up higher-income opportunities to be where they are
A lot of people are selected not just because they are smart, but because it was noticed they dutifully work on lower status tasks—there isn’t a lot of appetite to grab people gunning for CEO titles
I guess the point here is pretty basic. As long as the flow of culture from the top to the middle is good, things will hold together well and work effectively.
I really think this is true. It’s also valuable because it’s a relatively simple message.
Great post! This resonates a lot with me, and I’m happy the post has gotten a fair bit of attention. Anecdotally, this has increasingly become the part of EA I feel I have to answer for the most to outsiders these days.
A slightly related idea I’ve seen some success with, both in EA and elsewhere, is what I’ve come to think of as the reverse free lunch effect: when people get something fancy or expensive for free, they tend to become aware they are being incentivized to be there. After all, there is no such thing as a free lunch, and there might be an implication of getting something back. The interaction can end up more transactional in nature. Conversely, if they get the frugal treatment, you’re signaling they are there because they’re part of the ingroup. There is no facade, no fancy free lunch. They are there because they want to be a part of whatever you are doing together. This paradoxically often creates a much more exclusive-feeling experience, and therefore also a deeper connection to whatever you are there to do. Obviously the issue might be getting people in the door in the first place, so this might be more of an advanced technique. Maybe this is just a cultural effect from dugnad country, but I thought it was worth sharing.
Thank you so much for this post. It eloquently captures concerns that I’ve increasingly heard from group members (e.g., I know a fairly-aligned member who wondered whether a retreat we were running was a “waste of CEA’s money”). While I agree that the funding situation is a boon to the movement, I also agree that we should carefully consider its impact on optics/epistemics. I also think all your suggestions sound reasonable and I’d be really excited to see, for example,
a ‘go-to’ justification (ideally including a BOTEC) for spending money on events
more M&E for meta-EA funding, particularly spending from group organizers (and I say this despite it very much being against my self-interest, because I think this would substantially increase the effort of getting funding. So, I guess I’d really appreciate if an existing meta-EA funder looked into creating infrastructure for this)
a nuanced explanation of EA’s funding situation
I wonder if it might be possible to get volunteers to help find some of these opportunities to save money, in the genre of
I am not confident that this is true, because coordinating with volunteers is a lot of work and coordination-time is limited, but I could imagine a world where you could be like “here is my BATNA for booking flights for these speakers, if someone can improve upon this in the next 12 hours, I will donate the difference in money to the charity of their choice”.
You could outsource this to someone who would save more per hour worked than the total cost of their time.
Easier said than done!
True! But I think a good meta ops org could provide this kind of service for the community
I agree that this will be a good thing for a meta ops org to do, and I’d be excited to see a meta ops org! I suspect there might be even more valuable things that a meta ops org can do however (e.g. handle the legal and financial aspects of many orgs).
I suspect that this will be more of an issue for the global poverty part of the movement and less of an issue for the long-termist component of the movement.
Why do you think it’s less important for the x-risk/longtermism parts of the EA movement to have good PR and epistemics?
FWIW, Chris didn’t say what you seem to be claiming he said
It’s easier to justify for longtermism, as the comparison in people’s minds is less likely to be people starving in Africa. And it’s less likely to come off as hypocritical. So the PR risk is more manageable.
Epistemics is a risk though.
Maybe I’m misunderstanding this but I disagree. I think the average person thinks spending tons of money on global health poverty is good, particularly because it has concrete, visible outcomes that show whether or not the work is worthwhile (and these quick feedback loops mean the money can usually be spent on projects we have stronger confidence in).
But I think that spending lots of money on people who might have a .000001% chance of saving the world (in ways that are often seen as absurd to the average person) is pretty bad optics. A lot of non-EAs don’t think we can realistically make traction on existential risk because they haven’t seen any evidence of traction. Plus, longtermists/x-risk people can come across as having an unfounded sense of grandiosity, because there are a whole bunch of people out there who think their various projects will drastically transform the world, and most people won’t assume that the longtermist approach is the only one that’ll actually work.
Sorry, I think you might have actually misunderstood my point. I was talking about spending money on people working on global poverty vs. people working on longtermism, rather than spending money on global poverty vs longtermism.
My point is that if you invest a lot of money in people working on global poverty, the question that arises is why you aren’t spending it on global poverty, while it’s hard to spend money on longtermism without spending it on people. In any case, people are more accepting of AI researchers being paid large sums.
That makes sense though I feel like this still applies. It’s still not great optics to pay lots of money to people working on global poverty, but it’s far from unheard of and, if there’s concrete evidence that those people are having an impact then I think a lot of people would consider it justified.
I think the reason it’s acceptable for AI researchers to bring in large sums of money is more because of the market rate for their skillset and less because of the cause directly. I think if someone were paid a high salary to build complex software that solved poverty (if such a thing existed) I would guess that that would be viewed roughly equally. On the other hand if you pay longtermist and/or global poverty community-builders lots of money, this looks much worse.
Maybe I can help Chris explain his point here, because I came to the comments to say something similar.
The way I see it, neartermists and longtermists are doing different calculations and so value money and optics differently.
Neartermists are right to be worried about spending money on things that aren’t clearly impacting measures of global health, animal welfare, etc. because they could in theory take that money and funnel it directly into work on that stuff, even if it had low marginal returns. They should probably feel bad if they wasted money on a big party because that big party could have saved some kids from dying.
Longtermists are right to not be too worried about spending money. There’s astronomical amounts of value at stake, so even millions or billions of dollars wasted doesn’t matter if it ended up saving humanity from extinction. There might be nearterm reasons related to the funding pipeline they should care (so optics), but long term it doesn’t matter. Thus, longtermists will want to be more free with money in the hopes of, for example, hitting on something that solves AI alignment.
That both these things try to exist under EA causes tension, since the different ways of valuing outcomes result in different recommended behaviors.
This is probably the best case for splitting EA in two: PR problems for one half stop the other half from executing.
I don’t have anything smart or worthwhile to comment, but I want to say that I am glad you wrote this.
I’m quite uncomfortable with the idea that the best use of money is to give it to inexperienced young people from wealthy families who went to expensive schools. Helping privileged people get access to more privileged doesn’t rank high on my personal list of cause areas, and I’m glad that someone is speaking out against this trend.
I’m uncomfortable with this too, but more comfortable than I used to be.
Privileged people have a lot of power/leverage in the world. That leverage can be squandered, used for selfish means, or used for good.
If we think EAs have uniquely good ideas for identifying and solving neglected, pressing global problems, I want people with lots of leverage to learn from EA. The counterfactual is they use their leverage to do less altruistic or less effective things. I am willing to put money toward avoiding that.
Yes, you should be influenced by it, in proportion to the extent you give credence to their worldview and agree with their values.
Very strongly agree with you here. I also agree that the positives tend to outweigh the negatives, and I hope that this leads to more careful, but not less giving.
Thanks for writing this up!
This post does resonate with me, as when I was first introduced to EA, I was sceptical about the idea of “discussing the best ways to do good”. This was because I wanted to volunteer rather than just talk about doing good (this was before I realised how much more impact I could have with my career/donations) and I think I would’ve been even more deterred if I’d heard that donated funds were being spent on my dinners.
However, it sounds like my attitude might have been quite different to others, reading the comments here. Also, I suspect I would’ve ended up becoming involved in EA either way as long as I heard about the core ideas.
I think a giga-donation ($1B+) or two to GiveDirectly would go a long way toward improving optics (and, let us not forget, improving millions of lives!). In general, extravagant spending should be matched with such donations.
There should be some “optimal” allocation of funding, or at least a best effort to find one.
If there are extravagances (wasteful high spending that is ex ante bad), we should surface them publicly, analyze them, and take action so that they don’t happen again.
It doesn’t make sense to reallocate vast amounts of money to offset another bad act.
I strongly agree that one should focus on impact, not on offsetting. See Claire Zabel’s post against offsetting.
I’m not sure if offsetting is the right reference class. Maybe moral trade is more relevant? If we want broad support for—or at least to limit opposition to—EA/Longtermism, we should also do things with broad appeal (that are still highly effective in absolute terms—e.g. GiveDirectly).
The difficulty is in judging what is “wasteful”. To many outsiders, six-figure salaries for non-profit work will be judged “wasteful” or extravagant regardless of whether or not they actually are (from a counterfactual, all-things-considered EA standpoint).
In terms of optics at least, present-day inequality is a big thing.
I think it’s kind of ironic this has been downvoted, given that a similar point is made in the OP (the most upvoted post of all time), and it’s a comment about optics. What are the downvoters’ thoughts on optics?
I didn’t downvote this.
I am guessing that the reasoning in the comment isn’t “impact focused”.
Probably one of the key ideas EA brings is the ability to focus large resources on highly effective, impactful activities or projects or institutions, which sometimes involves high salaries or other high spending.
This idea is criticized sometimes. But it often seems that these criticisms lack a vision/model/understanding of how highly effective people or projects operate and succeed.
Another way of seeing this is to look at ineffective non-profits. I think it’s very unlikely that all the non-profits outside of EA are ineffective because everyone in them is dumb or unprincipled. Instead, it seems like people are caught in some sort of “Malthusian-like” trap.
They often internalize norms of low spending, spend their limited time on activities that ultimately amount to appeasement, and attend to social/political concerns that don’t go anywhere.
This situation drives out talent and prevents critical long term planning.
Right, I get that, but I’m talking about the perception of EA (optics) as viewed from the outside (as is OP). Looking at the meta-level: will EA’s impact be maximal if it is politically opposed? I’m playing devil’s advocate: it looks a bit suspicious if we conclude that the best way to have an impact is mostly to pay already privileged people high salaries. Especially given global inequality (hence the suggestion of GiveDirectly). Why not be in a strong position to counter this by saying we’re also taking significant steps to combat global poverty?
This post is excellent—thank you for writing and sharing. ❤️
Regarding this suggestion:
“Given the unilateralist’s curse, perhaps there should be some central forum for EA funders to coordinate / agree upon policies with an optics perspective in mind.”
I think this would be hugely helpful, and that such a forum should be open and accessible to the rest of the EA community. I agree that SBF and Dustin+Cari have made amazing strides and are funding generally awesome things, but there’s something unsettling about them being able to unilaterally move the needle so significantly. They hire staff and researchers, and I think that’s wonderful (since determining where to deploy money effectively is one of the hardest problems we face), but one proposal to move the community more in line with what you had suggested would be a donor voting system.
Imagine Open Phil has its team of dozens of researchers write up proposals that then get widely distributed among the EA community (some researchers advocating more spending on biorisk, others on public health, etc.), and then members of the EA community vote on which proposal they think would be most effective. OP’s yearly budget for giving could then be spent proportionally according to the votes that each proposal receives. This has the benefit of incorporating the wisdom of the crowd (enlisting the help of tens of thousands of intelligent, thoughtful EAs rather than relying on the few dozen OP researchers themselves), while also acting as a yearly referendum on the values of EA. Wouldn’t it be interesting to find out concretely how much money EA would dedicate to each cause area if we were all collectively voting on where to spend it?
It’s kind of like a reverse donor lottery—everyone pools their money, then you collectively determine where to spend it, knowing that your preferred cause area might not be the one that’s favored by others, but trusting that tens of thousands of EAs are smarter than one.
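To make the mechanics concrete, here is a minimal sketch of the proportional allocation this proposal describes; it is only an illustration, not a description of any real Open Phil process, and all proposal names, vote counts, and the budget figure are hypothetical.

```python
# A minimal sketch of the proportional "donor voting" allocation described above.
# All proposal names, vote counts, and the budget figure are hypothetical.

def allocate_budget(votes: dict[str, int], budget: float) -> dict[str, float]:
    """Split `budget` across proposals in proportion to their vote counts."""
    total_votes = sum(votes.values())
    if total_votes == 0:
        raise ValueError("No votes cast; cannot allocate the budget.")
    return {
        proposal: budget * count / total_votes
        for proposal, count in votes.items()
    }

# Example: three hypothetical proposals voted on by community members.
votes = {"biorisk": 12_000, "global health": 18_000, "AI safety": 10_000}
allocation = allocate_budget(votes, budget=100_000_000)  # illustrative yearly budget
for proposal, amount in allocation.items():
    print(f"{proposal}: ${amount:,.0f}")
```

Under this toy example the yearly budget is split $30M / $45M / $25M; the open question, as the replies below note, is who counts as a voter and how the voting pool is kept from being gamed.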
Love you all!
A core issue with “voting” is that it’s not hard to change the voting pool (a whole other side of the coin that no one has stirred everyone up with a post about, I guess because it’s less visceral than being infiltrated by stealthy predators). The incentives to change the voting pool would be vast, and the institutions needed to regulate it are so demanding and simply don’t exist, that the system would collapse almost immediately.
I agree that that’s a difficult issue. I also think that even if that could be solved, current decision-making processes lead to better decisions than this proposal would.
As a first poster here I note the posting advice to be kind. I try and perhaps I fail. I think that the article to which this is a comment is too kind to everyone involved.
There is a fundamental underlying moral and ethical problem here which has nothing specifically to do with EA. Can one do definite small harm for potential big benefit? Can one be a little evil to perhaps be more good?
There are numerous philosophers (and religions) who’ve thought deeply about this question and concluded you cannot do so. I think one should generally be most reluctant to make this trade-off. Much more often than not, one’s good intention does not have a good outcome; and if the good intention involves embracing anything questionable, then it often happens that the intention is not followed through, or the result is not good, and you’re left with the consequences of the bad prerequisites to your action.
Consequentialism can be very bad. The idea that one can let someone suffer today to perhaps prevent two suffering in years to come is arrogant. How does one know one will be successful? How can one step over the sick person today? Yet I read in the latest accounts of the Centre for Effective Altruism UK charity that it withheld £7million+ in reserves, justified in great part (read it for yourself) by the possibility of reputational damage or the possible loss of a donor! How could that happen? By a journalist reading and reporting this comment, perhaps. Instead, that £7million should be spent on today’s problems; it shouldn’t be saved for free beer for (and the very high salaries of) “effective” “altruists” should the funding dry up because the donors come to think like me. https://register-of-charities.charitycommission.gov.uk/charity-search/-/charity-details/5026843
The road to hell is paved with good intentions.
I want to place the following on record so that temptation is removed from me and so that perhaps some others are similarly emboldened. I will not accept the 5* hotel and fine dining Bahamas week with business class flights and expenses to attend some supposedly “effective” “altruism” event. Neither will I accept the more local Bracknell 2* weekend with pizza and 2nd class rail refund.
(I found myself at a London event recently which had an EA component. I made sure to pay for my “free” beer.)
I’m unlikely to attend such event because (1) I will feel personally sullied by it all and despite that might out of politeness not speak out, and (2) you won’t invite me because maybe I might eschew politeness and speak out, and (3) there is much better use for the money.
If anyone accepts such a Bahamas bribe then they really are allowing themselves and others to smugly not be saving eyesight and callously not be buying malaria nets while they stuff themselves with caviar and snooze poolside with 500 thread Egyptian cotton towels. And that’s true for Bracknell burgers too, it’s just a question of degree.
I’m not saying one must wear sackcloth and eat gruel. There really is a difference between giving oneself a treat and doing so while pretending one is doing good. When honesty with oneself is compromised, everything else soon follows. When one is living comfortably enough, accepting food and drink and accommodation from charitable funds is just wrong.
I don’t mean to be rude in the common sense, but I do mean to be rude in the sense of being startlingly abrupt. I am being judgemental. But the behaviour I criticise here evidently needs an emperor’s new clothes approach. Whether I’m pure enough to deliver it is a different matter. Doubtless this has been said already by others better than me, and better than I do.
(Comment in personal capacity only.)
I support people sharing their unfiltered reactions to issues, and think this is particularly valuable on a contentious topic like this one. Critical reactions are likely undersupplied, and so I especially value hearing about those.
However, I strong-downvoted your comment because I think it is apt to mislead readers by making statements that can be construed as descriptions of actual EA programs (such as the FTX EA Fellowships in the Bahamas) despite being substantively inaccurate. For instance:
I don’t think that “fine dining” is an appropriate description for the food options covered for the FTX EA Fellows. I would say it was fine but lower in both quality and ‘fanciness’ than food typically provided at, say, EA events in the UK. (Though it’s possible it was more expensive because many things in the Bahamas that are targeted at tourists are overpriced.)
I’m fairly sure that eating caviar is not a typical activity among EAs working from the Bahamas.
I doubt that traveling business class is common among EAs visiting the Bahamas. I didn’t. (I do think that in some cases the cost and increased climate impact of flying business class can be well justified by adding several counterfactual work hours.)
I don’t know what a 500-thread Egyptian cotton towel is, but the towels I saw in the Bahamas looked fairly normal to me.
As far as I can tell from googling, the hotel in which most FTX EA Fellows stayed was not a five-star hotel.
I’m aware that some of your statements might have been intended as satirical, but I think to readers the line between satire and implied factual claims will at the very least be ambiguous, which seems like a recipe for misinforming readers.
I also have no idea what you’re referring to when mentioning a “UK Effective Altruism charity that [...] withheld £7million+ in reserves”. I don’t know whether or not this is accurate, but I think it’s bad practice to make incriminating claims without providing information that is sufficiently specific for readers to be able to form their own views on the matter.
(I generally support thoughtful public discussion of the issue raised by the OP, and think it made several good points, though I don’t necessarily agree with everything.)
This may well be literally true, but it is unusually fancy still (I also tried Googling it and couldn’t figure out how many stars it has).
I think it depends on the baseline. If I compare it to staying in a hostel like I would do when backpacking or a trip with friends, then it was definitely fancy. If I compare it to the hotel that a mid-sized German consulting firm used for a recruiting event I attended about five years ago, then I would say it was overall less fancy (though it depends on the criteria – e.g., in the Bahamas I think the rooms were relatively big while everything else [food, ‘fanciness’ as opposed to size of the rooms, etc.] was less fancy).
Huh. My impression is that the hotel I stayed at in my Google offsite (c.2019) was overall less fancy. The rooms were similarly nice but my Google offsite had 2 people to a room, and worse views.
(My understanding is that there were logistics reasons that made it hard for FTX to host a large contingent of people in a place that’s less fancy, but more cheaply. And this isn’t a big priority for them compared to making and donating billions of dollars. And of course Google has built up way more ops capacity over the decades to save resources while still delivering a good experience)
I guess Google is more reasonable than German consulting firms :)
FWIW, my sense is that for business trips that last several weeks it is uncommon for companies to host several people in one hotel room, but I only have a few data points on this, and maybe there is a US-Europe difference here.
(It is worth noting that one of my data points is about a part of the German federal bureaucracy which otherwise has fairly strict regulation regarding travel/accommodation expenses. There is literally a federal law about this, which may also be an interesting baseline more generally. It is notable that it allows first-class train rides for trips exceeding two hours, and while economy-class flights are mandated as default it does allow business class flights when there are specific “work-related reasons” for them.)
(To be clear, I do think that “running a fellowship in the Bahamas predictably leads to incurring higher costs for accommodation than you would in a place with a larger supply” is a fair point, and I would be sad if all EA events worldwide used that level of fanciness in accommodation for participants while ignoring available alternatives that may be cheaper without a commensurate loss in productivity/impact.
I just don’t think it’s a decisive argument against the Bahamas fellowship having been a good idea. I expect it’s among the top 5–10, but very likely not the top 1–3, considerations one would need to look at to judge whether the Bahamas fellowship was overall worth it.
I expect the two of us are roughly on the same page about this.)
Yeah I agree with this. To be more specific, I think the biggest reasons the high costs/fanciness can be bad are as the OP says optics and epistemics (or more descriptively, “losing the spartan character of earlier EA is bad for our soul or something”), though the opportunity cost of the money is also non-trivial in absolute terms.
I regret damaging my argument by perhaps unrecognisable satire. And I meant to [satirically] allege 500-thread-count sheets, not towels. Whether or not the flights were business class, it’s that vs Zoom. And look, the hotel had a pool, yes? And fine dining means different things to different people, but the cost of the meals was likely about the same as at the Michelin-starred restaurant near me, which I must start to frequent so I can think better about how to be effectively altruistic. If that is satire, I hope it’s hard-hitting.
I am sorry to have improperly or ambiguously identified the “charity”. It’s the Centre for Effective Altruism [UK 1149828]. I’ve amended my post to make that clear.
Thank you, I appreciate the clarification (and therefore upvoted your most recent comment).
(And yes, I think the hotel had a pool.)
Regarding the meals, I’m not sure if I have ever eaten in a Michelin-starred restaurant, but I looked up the prices at a Michelin-starred restaurant near Oxford (where I live), and it seems like a main course there is about twice as expensive as one in the restaurant attached to the relevant Bahamas hotel. (If I remember correctly, I had about two meals in that restaurant over the course of ~3 weeks. The other meals were catered office food that was, in my view, less ‘fancy’ than what you get at the main EA office in Oxford, or PlennyBars that I had brought from home.)
More broadly, it seems like we have pretty strong empirical and perhaps also value-based disagreements about when spending money can increase future impact sufficiently to be worth it.
I was not criticising any one particular event or any one person’s conduct. Indeed, I gave two examples of sponsorship/bribe I would not accept.
One was the Bahamas business-class, 5-star, fine-dining week. I’m amazed that something like this actually occurs; the repudiation of my example is that “it wasn’t quite as nice as that”, but it was pretty damn fine.
The second example was the Bracknell 2nd-class rail, 2-star hotel, pizza restaurant weekend. Accepting a charity’s money for that is also unacceptable. It does seem the real Bahamas event which actually happened was almost as expensive as I posited (within a factor of 2, anyway) and much, much more expensive (10 times?) than my Bracknell example, but such a comparison is not made by anyone here other than me.
Michelin stars are awarded for fine dining, not on menu pricing. There are plenty of hotels which are nowhere near 5-star standard with restaurant prices exceeding those of my local Michelin-starred restaurant. The point being made is that eating at such a place does not improve the effectiveness of one’s altruism. Indeed, it must have a negative impact, because, mid-priced or more expensive, that’s a charity’s money you’re accepting for foie gras instead of that money being spent on mosquito nets.
Such is the tone here that I expect it is the mention of foie gras that provokes a response, rather than the similar force-feeding of the similarly willing [the geese volunteer too] EA bribe-takers.
That my comments are voted down so heavily here shows maybe the ineptitude and rudeness of my writings, or it shows something else. Too many people want to be on this gravy train and are not being self-critical.
For what it’s worth, as someone that donates most of his income and is really uncomfortable around free-spending and fancy events, I think there are indeed some important concerns around this topic. I am grateful for parts of your comments.
But your focus on factually wrong examples, and even more so the very judgemental/aggressive wording in this comment (the worst one yet), is really hurting the argument and making it impossible to have a conversation :(
Could you try to tone it down and make it less emotionally charged, focusing more on your main point than on insults and arguments about Michelin-starred restaurants that no one is dining at?
I was right[, almost]. The issue becomes not the misuse of funds; it’s me saying [not “foie gras” but] “Michelin”.
Again, the examples were never meant to be read as what actually occurred at any one event. But that my deliberately hyperbolic example is identified so very closely with a real, actual event just makes my point even more strongly. OK, I got the colour of the wallpaper wrong, sorry, but there really was an all-expenses-paid luxury jolly to the Bahamas. It’s a scandal.
I note no one complains about the other factually wrong (because it too was made up) example, about the much, much cheaper 2-star-plus-pizza Bracknell event.
Again: I said there were two styles of event I would not be bribed to attend. Not only would I not attend the fine dining etc etc event, I would not attend the pizza etc etc one either. The response: “I wouldn’t call the [actual] Bahamas event fine dining.” “The towels weren’t unusually fine.” “I didn’t travel Business Class.”
Frankly it seems I don’t know anywhere near the extent of all this abuse of funds. This is just the tip of the iceberg. This is what happens to money you (if the cap fits etc) solicit from me.
Some people are uncomfortable being associated in any way with this. What is the behaviour modification required? Mine! How about a more general expression of discomfort from more people about this so-called effective so-called altruism? Silence is acquiescence.
I think there was an unfortunate misunderstanding. There was indeed an event in the Bahamas that had already been somewhat criticized, and people assumed you were referring to it.
I don’t know the details, but it was funded by the Bahamas-based billionaire Sam Bankman-Fried’s FTX foundation: https://forum.effectivealtruism.org/posts/sdjcH7KAxgB328RAb/ftx-ea-fellowships
So it’s absolutely not money that was solicited from donors, or that someone donated thinking it would go to other causes.
I understand your shock and rage if you thought it was money donated for malaria bednets being misused, but that’s definitely not the case. The negative/adversarial language definitely did not help clear this up earlier.
Am I reading the situation correctly?
I agree with basically all you are saying here, Max, and thanks a lot for the thoughtful and detailed response to a not very constructive comment.
Just to clarify: the claim psb777 made about a “UK Effective Altruism charity that [...] withheld £7million+ in reserves” seems to be factually correct. This is likely CEA, and looking at their public accounts they definitely have something like that (not sure where the exact £7m figure comes from) in their unrestricted reserve funds.
A reserve fund like this is critical for operating any entity. Without it, you can’t really hire or make agreements or even perform basic planning responsibly.
It’s unfair, wild really, that this would be used rhetorically in the way it was.
The extra £7million+ retained is about half of the Centre for Effective Altruism’s incoming funds in the year ending mid-2020. I’ve run a commercial concern with a similar staff size and with far less income, and with practically zero ongoing reserves, at least in comparison. “A reserve fund like this is” NOT “critical for operating any entity.” At the level of ongoing fixed expenses, salaries and rental, that is 4 years’ existence guaranteed from these new reserves even if income were to become zero and all staff were retained.
Sitting on funds of this magnitude for the reasons stated in the annual report is incorrect behaviour. People are dying. Such a comment is neither “unfair” nor [solely] “rhetorical”. The whole damned point of donations is that the funds taken must be used constructively, not retained for no other reason than to maintain the “charity” should funding dry up. Lack of funding should be the reason to fold, to recognise one has failed, not a reason to use one’s reserves! Let me indulge in “unfair” rhetoric again, I only wish I could do it better: people are dying while these reserves are being sat on.
Some considerations:
A non-profit does not have the same cash flow or sources of income and this may require a different size reserve fund.
Not trying to “trip you up”, but you said your org had “far less income”—surely you know that reserve size is inherently correlated with income.
Under the current circumstances, many EA organizations are larger in scope (and I think more important) than they appear. For CEA, for example, there are over 100 community groups to operationally/financially support! This growth is recent.
There’s probably more going on too that can’t easily be written about—CEA recently returned ~$50M (in expected? cashflow) to a major funder.
All this produces a soup on the books. Accounting is important and powerful, but read by amateurs or with prejudice, I think it can easily produce noise.
The truth is that it’s hard to determine whether you are correct or not.
However, in what I think is a neutral statement, that is neither friendly nor unfriendly to you: based on your other comments, I’m skeptical of your judgement and attitude.
Maybe what is motivating you is principled negative views on CEA’s activities. If that is the case, it might work better to make that case directly (maybe with a new throwaway).
P.S.: Uh, “patient longtermism” might be a little upsetting to you.
Is there anything publicly accessible about this? I’m really interested in the current funding landscape, and how that would impact the marginal value of donations in vs outside of EA
So first of all, it’s good to tap the brakes here on the comments I’m making in this thread. It’s not clear I’m remotely informed or know anything about EA, much less informed about CEA.
For all we know, I got rejected from the recent EAG and there’s pictures of me on the security booth to make sure I didn’t sneak in.
So, yes! I think this is a great question. I’m referring to this:
https://forum.effectivealtruism.org/posts/xTWhXX9HJfKmvpQZi/cea-is-discontinuing-its-focus-university-programming
This post describes a CEA university program that was discontinued.
So below is my guess of the context or background of what happened:
This $50M program was one specific (and promising) instance of a major effort on college campuses that would have brought on generations of new EA leaders and future donors, who would go into altruism instead of, say, Wall Street or corporate jobs.
This is a larger program and, in a natural way, requires a discrete commitment of money, up to $50M in one vision or phase. (Note that this money wasn’t necessarily transferred, but it seems that funding for this or similar programs could show up as changes in reserve fund levels, which is why I mentioned it.)
So what happened? The involved CEA team is really talented and working on a huge number of projects. At the same time, outside of CEA, at the moment, there happen to be talented EAs working on related projects. So at this particular time, for this particular program, the CEA staff decided that other people in EA could do this right now.
So they gave back the money for this program, voluntarily, instead of just using it to get more headcount, make themselves look bigger, or something.
I think that, no matter what CEA’s actual need for funds is, this act of giving back the money and deciding others can do the program is exactly what you want to see.
It’s unclear if this indicates anything about CEA’s funding or any funding situation in EA—but seems to suggest good governance and use of money at CEA.
Thanks!
Strongly agree!
You clearly know much more than I do. Even if things were more transparent it would still be hard to keep up with everything, so thank you for sharing your perspective and what you happen to know!
Retaining funds without charitable intent [for survival in the case of potential unknown reputational damage, for the potential loss of a donor] where those funds are beyond the reserves actually necessary is what I criticised about the UK charity CEA. That the related but separately run and separately managed USA CEA seems not to do that is something I too would applaud.
What amount of runway would you agree is justifiable?
I assume one year would be ok, at that scale?
I don’t know what the correct level would be, other than that the current level feels very wrong. They themselves give summary reasons for the very high reserves, which seem unacceptable.
(1) The possible but unknown reputational damage, which they can’t really expect; or, if they do expect it, they ought not to build a fighting fund against it from my (potential*) charitable donations. And if they’re expecting reputational damage, a better strategy would be to change their behaviour.
(2) The potential loss of a big donor. When they lose a big donor, that is the time to cut back on their funding of projects and their staff costs. Highly paid workers (and they are, in this organisation) should not be protected from losing their jobs in a downturn any more than commercial enterprise workers would be. Instead they take my (potential*) donations and set them aside for this purpose.
You’ll all be aware of the matching-funds concept in charitable giving. If I give £10 then someone else guarantees to match this. Effectively my altruism is DOUBLED. This is a great concept and has encouraged me to give in the past. Let’s see what’s happening here. The UK CEA takes half of its donations and sits on them. If I gave £10 to the UK CEA in 2020, only £5 is used. We’re not even talking about the necessary admin and infrastructure costs here. Effectively my altruism is HALVED.
___
(*) Why am I here? Through hard work and luck I find myself with surplus assets, and I thought I would give some of it away now and some later. I wanted to do so constructively and was so pleased to find the EA crowd! I had already typed the (UK) CEA’s name and charity number into my draft will before I decided to properly check the hype for myself. I think I might be better off donating to Oxfam: their failings are distressing, but the failings are human, they are not policy, and they’re embarrassed by them.
I’m uncomfortable criticising so anonymously. I’ve tried to find out how properly to identify myself here. I cannot edit my psb777 id and there seems to be nowhere to type in my name. I’ll stick it in the bio notes. Meanwhile I’m Paul Beardsell if you’re looking for whom to avoid.
I agree that, based on what you’re posting so far, there are definitely better choices for you than UK CEA. Here are two lists by EA-related projects:
https://www.givingwhatwecan.org/best-charities-to-donate-to-2022/#donate-to-reputable-and-effective-charities
https://www.thelifeyoucansave.org/best-charities/ (this one includes Oxfam!)
There is a lot of diversity of opinions in the EA movement on what’s exactly best to donate to, depending on each individual’s unique values.
GiveWell’s Maximum Impact Fund https://www.givewell.org/maximum-impact-fund is probably what you were looking for in the first place, it distributes 100% of the money to projects with the highest direct impact, and its employees are funded by external funds.
As far as I understand, CEA’s “mission is to build a community of students and professionals acting on the principles of effective altruism, by creating and sustaining high-quality discussion spaces”.
So indeed they probably do “fancy” events (the Bahamas thing is completely unrelated, see other comment).
Which is probably something you do not find as valuable as more direct work. (And I personally would agree with you, and donated to New Incentives last year).
Don’t worry about criticizing “anonymously”; your name is the first thing that shows up when you Google “psb777” anyway. If you want an admin to edit your visible name, you can ask for help by clicking in the bottom right.
But please try to be more polite; everyone here is doing their best. If you don’t like how CEA is using their money, then indeed you should donate to a recommended charity instead. Keep in mind that Oxfam seems to have similar salaries and ~£15M in assets, so if this is something important to you, charities like the Against Malaria Foundation could be a better choice.
I don’t feel tripped up. Surely you know (your phraseology) that a massive increase in reserves can only come from massive income, and that assessing the size of a necessary reserve has little to do with income but lots to do with fixed outgoings. I’m asserting that the reserves are being kept for reasons stated in their own report which are not justifiable by a charity. The reserves are excessive. My point was to counter the false assertion above that such a degree of reserves is necessary for any organisation.
Not critical. Not even usual. Not for any entity. Not at 4+ years’ fixed operating expenses, even if no further income ever arose and all staff were retained.
What motivates me is a horror of people making and accepting large expenses from charitable funds. That people accept large salaries to work in the charitable sector because perhaps they could earn as much elsewhere is a different question; it’s the expenses (entertainment, travel, accommodation) paid for and accepted instead of the mosquito nets etc. which is very distasteful.
The CEA was merely a prominent example of charitable funds being retained for non-charitable purposes. But I may take up your suggestion in that regard, thanks.
But your headline point, that I seemingly don’t understand “patient longtermism”, is I think unwarranted. Two major reasons given by the CEA for the retention of massive reserves are (1) survival in the case of reputational damage and (2) the loss of a major donor.
Neither of these are good examples of “patient longtermism”. They’re bad examples.
I’m not sure I understand this position. Money is fungible. If you think it’s morally acceptable for EA charities to offer high salaries so people take less of a pay cut to work in them, then it should also be acceptable for EA charities to offer other perks (directly financial or otherwise) to make their organizations more appealing to work for. (Alternatively, perhaps neither is morally acceptable.) At any rate, I’m not sure why catered meals or flights are categorically different from high salaries here, and in the case of in-office meals and flights there’s at least a plausible business justification for them.
I also think this comment from someone else with experience in the (non-EA) charitable sector is illuminating:
I don’t understand why this comment is so heavily downvoted. While the tone of the comment might not be ideal, and I don’t agree with many elements of the argument or with the (implied) conclusion, I think it makes a generally valid point of criticism (one that I assume lots of outside people would share) that the EA community should acknowledge and take seriously rather than ignore.