Michael Townsend, Researcher at Giving What We Can.
Like other commenters, to back up the tone of this piece, I’d want to see further evidence of these kinds of conversations (e.g., which online circles are you hearing this in?).
That said, it’s pretty clear that the funding available is very large, and it’d be surprising if that news didn’t get out. Even in wealthy countries, becoming a community builder in effective altruism might just be one of the most profitable jobs for students or early-career professionals. I’m not saying it shouldn’t be, but I’d be surprised if there weren’t (eventually) conversations like the ones you described. And even if I think “the vultures are circling” is a little alarmist right now, I appreciate the post pointing to this issue.
On that issue: I agree with your suggestions of “what not to do”—I think these knee-jerk reactions could easily cause bigger problems than they solve. But what are we to do? What potential damage could there be if the kind of behaviour you described did become substantially more prevalent?
Here’s one of my concerns: we might lose something that makes EA pretty special right now. I’m an early-career employee who just started working at an EA org. And something that’s struck me is just how much I can trust (and feel trusted by) people working on completely different things in other organisations.
I’m constantly describing parts of my work environment to friends and family outside of EA, and something I often have to repeat is that “Oh no, I don’t work with them—they’re a totally different legal entity—it’s just that we really want to cooperate with each other because we share (or respect the differences in) each other’s values”. If I had to start second-guessing what people’s motives were, I’m pretty sure I wouldn’t feel able to trust so easily. And that’d be pretty sad.
I’d just like to give a shoutout to the organisers for their great work!
I don’t think anyone appreciates how hard running a conference can be at the best of times. But on Mars, the logistical difficulties are on another planet: the organisers have had astronomical health and safety challenges, and don’t get them started on the availability of vegan catering…
I appreciate the point of your story, Nuño, but I don’t think it fairly characterises my post, and I think its dismissiveness is unwarranted.
For one, I didn’t suggest that, from a longtermist perspective, “the optimal thing to promote was earning to give.” I explicitly said the opposite here:
...my personal all-things-considered view is pretty similar to Ben’s: when someone has a good personal fit for high-impact direct work, they’re likely to have more impact pursuing that than earning to give. This view is also shared by Giving What We Can leadership.
And in general, I quite repeatedly indicate that my argument does not make claims about the value of effective giving compared to direct work. Promoting effective giving is not the same thing as promoting earning to give.
So I think your story, though humorous and (I take it) coming from a place of love, is directed at something I’m not saying.
I think this post does a great job of capturing something I’ve heard from quite a few people recently.
Especially for longtermist EAs, it seems direct work is substantially more valuable relative to donations than it was in the past, and I think your thought experiment about the number of GWWC pledges it’d make sense to trade for one person working on an 80k priority pathway is a reasonably clear way of illustrating that point.
But I think that this is a false dilemma (as you suggest it might be). This isn’t just because I doubt that the pledge (or effective giving generally) is in tension with direct work, but because I think they’re mutually supportive. Effective giving is a reasonably common way to enter the effective altruism community. Noticing that you can have an extraordinary impact with donations (which, even from a longtermist perspective, I still think you can) can inspire people to begin taking action to improve the world, and potentially continue on to direct work. Historically it’s been a pretty common first step, and though I anticipate more direct efforts to recruit highly engaged EAs will become relatively more prominent in future, I still expect the path from effective giving to a priority-path career to occur much more often than effective giving leading someone away from a priority path.
I’ve heard a lot of conflicting views on whether the above is right; it seems quite a few people disagree with me, and think there’s much more of a tension here than I do, and I’d be interested to hear why. (For disclosure, I work at GWWC and personally see getting more people into EA as one of the main ways GWWC can be impactful).
I suppose the upshot of this, if I’m right, is that the norm that “10% and you’re doing your part” can continue, and it’s not so obviously in tension with the fact that doing direct work may be many times more impactful. While it may be uncomfortable that there are significant differences in the impactfulness of members of the community, I think this is, was, and always will be the case.
Another thing worth adding is that I think there’s also room for multiple norms on what counts as “doing your part”. For example, I think you should also be commended and feel like you’ve done your part if you apply to several priority paths, even if you don’t get one / it doesn’t work out for whatever reason. Maybe Holden’s suggestion of trying to get kick-ass at something, while being on standby to use your skill for good, could be another.
By way of conclusion, I feel like what I’ve written above might seem dismissive of the general issue that EA has yet to figure out (given the new landscape) how to think about demandingness. But I really think there is something to work out here, and so I really appreciate this post for raising it quite explicitly as an issue.
As a former applicant for many EA org roles, I strongly agree! I recall spending on average 2-8 times longer on some initial applications than was estimated by many job ads.
As someone who just helped drive a hiring process for Giving What We Can (for a Research Communicator role) I feel a bit daft having experienced it on the other side, but not having learned from it. I/we did not do a good enough job here. We had a few initial questions that we estimated would take ~20-60 minutes, and in retrospect I now imagine many candidates would have spent much longer than this (I know I would have).
Over the coming month or so I’m hoping to draft a post with reflections on what we learned from this, and how we would do better next time (inspired by Aaron Gertler’s 2020 post on hiring a copyeditor for CEA). I’ll be sure to include this comment and its suggestion (having a link at the end of the application form where people can report how long it actually took to fill the form in) in that post.
Speaking personally, I have also perceived a move away from longtermism, and as someone who finds longtermism very compelling, this has been disappointing to see. I agree it has substantive implications on what we prioritise.
Speaking more on behalf of GWWC, where I am a researcher: our motivation for changing our cause area from “creating a better future” to “reducing global catastrophic risks” really was not based on PR. As shared here:
We think of a “high-impact cause area” as a collection of causes that, for donors with a variety of values and starting assumptions (“worldviews”), provide the most promising philanthropic funding opportunities. Donors with different worldviews might choose to support the same cause area for different reasons. For example, some may donate to global catastrophic risk reduction because they believe this is the best way to reduce the risk of human extinction and thereby safeguard future generations, while others may do so because they believe the risk of catastrophes in the next few decades is sufficiently large and tractable that it is the best way to help people alive today.
Essentially, we’re aiming to use the term “reducing global catastrophic risks” as a kind of superset that includes reducing existential risk, and that is inclusive of all the potential motivations. For example, when looking for recommendations in this area, we would be happy to include recommendations that only make sense from a longtermist perspective. A large part of the motivation for this was based on finding some of the arguments made in several of the posts you linked (including “EA and Longtermism: not a crux for saving the world”) compelling.
Also, our decision to step down from managing the communications for the Longtermism Fund (now “Emerging Challenges Fund”) was based on wanting to be able to more independently evaluate Longview’s grantmaking, rather than brand association.
We’ll release payout reports ~~each quarter~~ when we disburse funds (likely bi-annually). The exact format/style hasn’t yet been determined, but we’re aiming to explain the reasoning behind each grant to donors.
Thanks for your questions!
As Linch suggests, opportunities that seem promising but aren’t sufficiently legible can be referred to other funders to investigate.
We reached out to staff at Open Philanthropy about setting up this fund, and received positive feedback. The EA Funds team (with input from LTFF grant managers at the time) had also previously considered setting up a “Legible Longtermism Fund” — my understanding is the reason they didn’t was due to lack of capacity, but they were in favour of the idea.
Whether the best opportunities are sufficiently legible is an interesting question:
It may depend on whether you look at it in terms of cost-effectiveness, or total benefit:
In pure cost-effectiveness terms:
I think I may share your intuitions that some of the smaller grants the Long-Term Future Fund makes might be more cost-effective than the typical grant I expect the Longtermism Fund to make (though, it’s difficult to evaluate this in advance of the Longtermism Fund making grants!).
Though, we anticipate the Longtermism Fund’s requirement for legibility might, in some cases, be beneficial to cost-effectiveness. For example, we anticipate that some organisations may prefer receiving grants from the Longtermism Fund (as it’s democratically funded and highly legible) over other funders. Per his comment, Caleb (from EA Funds) and a reviewer from OP share this view.
In total benefit terms:
My intuition, informed by just double-checking Open Phil’s and FTX FF’s respective grants databases, is that a significant amount of longtermist grantmaking goes to work that would be sufficiently legible for this fund to support.
So there seems to me to be plenty of sufficiently legible work to support.
My bottom-line view is that the effect of the fund will be to:
Increase the total amount of funding going to longtermist work. This may be especially important if longtermism manages to scale up significantly and funding requirements increase (e.g., successful megaprojects).
Change the proportion of funding to legible/illegible opportunities provided by individual donors/large funders (i.e., the proportion of funding going to legible work provided by individual donors will increase).
Provide a funder that may be favourable to grantees who want to be funded by something democratically supported/highly legible.
I don’t think its ‘screening off’ of opportunities that don’t meet its legibility requirement will make it more difficult for those organisations to receive funding.
Worth noting that I’m speaking as a Researcher at GWWC, whereas Longview is primarily responsible for grantmaking.
Thanks for posting this—as the other comments also suggest, I don’t think you’re alone in feeling a tension between your conviction of longtermism and lack of enthusiasm for marginal longtermist donation opportunities.
I want to distinguish between two different ways of approaching this. The first is simply maximising expected value; the second is trying to act as if you’re representing some kind of parliament of different moral theories/worldviews. I think these are pretty different. [1]
For example, suppose you were 80% sure of longtermism, but had a 20% credence in animal welfare being the most important issue of our time, and you were deciding whether to donate to the LTFF or the animal welfare fund. The expected value maximiser would likely think one had a higher expected value, and so would donate all their funds to that one. However, the moral parliamentarian might compromise by donating 80% of their funds to the LTFF and 20% to the animal welfare fund.
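The difference between the two approaches in this example can be sketched as a toy calculation (my own illustration, not a real decision procedure; the fund names and credences are just the ones from the example above):

```python
# Toy sketch: an expected-value maximiser vs a moral parliamentarian
# splitting a $1,000 donation. For simplicity, each worldview's best fund
# is treated as equally cost-effective by its own lights, so credence
# alone decides where the money goes.

def ev_maximiser(credences, donation):
    """Donate everything to the single worldview with the highest credence."""
    best = max(credences, key=credences.get)
    return {fund: (donation if fund == best else 0) for fund in credences}

def moral_parliament(credences, donation):
    """Split the donation in proportion to credence in each worldview."""
    return {fund: credences[fund] * donation for fund in credences}

credences = {"LTFF": 0.8, "Animal Welfare Fund": 0.2}

print(ev_maximiser(credences, 1000))      # everything goes to the LTFF
print(moral_parliament(credences, 1000))  # roughly an 80/20 split
```

The interesting design question is what the parliamentarian should do when one fund is believed to be far more cost-effective even by the other worldview's lights; that's where the two models genuinely come apart.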
From this comment you left:
I’m not convinced small scale longtermist donations are presently more impactful than neartermist ones, nor am I convinced of the reverse. Given this uncertainty, I am tempted to opt for neartermist donations to achieve better optics.
I take it that you’re in the game of maximising expected value, but you’re just not sure that the longtermist charities are actually higher impact than the best available neartermist ones (even if they’re being judged by you, someone with a high credence in longtermism). That makes sense to me!
But I’m not sure I agree. I think there’d be something suspicious about the idea that neartermism and longtermism align on which charities are best (given they are optimising for very different things, it’d be surprising if they arrived at the same recommendations). But more importantly, I think I might just be relatively more excited about the kinds of grants the LTFF is making than you are, and also more excited about the idea that my donations could essentially ‘funge’ Open Philanthropy (meaning I get the same impact as their last dollar).
I also think that if you place significant value on the optics of your donations, you can always just donate to multiple different causes, allowing you to honestly say something like “I donate to X, Y and Z—all charities that I really care about and think are doing tremendous work” which, at least in my best guess, gets you most of the signalling value.
Time to wrap up this lengthy comment! I’d suggest reading Ben Todd’s post on this topic, and potentially also the Red-Team against it. I also wrote “The value of small donations from a longtermist perspective” which you may find interesting.
Thanks again for the post, I appreciate the discussion it’s generating. You’ve put your finger on something important.
- ^ At least, I think the high-level intuitions behind each of these mental models are different. But my understanding from a podcast with Hilary Greaves is that when you get down to trying to formalise the ideas, it gets much murkier. I found these slides of her talk on this subject, in case you’re interested!
Thanks for the thoughtful comment.
I think there’s a strong theoretical case in favour of donation lotteries — Giving What We Can just announced our 2022/2023 lottery is open!
I see the case in favour of donation lotteries as relying on some premises that are often, but not always true:
Spending more time researching a donation opportunity increases the expected value of a donation.
Spending time researching a donation opportunity is costly, and a donation lottery allows you to only need to spend this time if you win.
Therefore, all else equal, it’s more impactful (in expectation) to have a 1% chance of spending 100 hours to decide where $100,000 should go than it is to have a 100% chance of spending 1 hour to decide where $1,000 should go.
And donation lotteries provide a mechanism to do the more impactful thing.
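The arithmetic behind that third premise can be sketched as a toy expected-value calculation (my own illustration; the "quality" multipliers are assumptions made up for demonstration, standing in for how much research improves the grant):

```python
# Toy sketch of the donor-lottery premise: expected (quality-weighted)
# dollars directed = chance of deciding x pot size x decision quality.

def expected_value(p_win, pot, quality):
    """Expected quality-weighted dollars a donor directs."""
    return p_win * pot * quality

# Assumed multipliers: suppose 100 hours of research makes a grant 2x as
# good as a baseline choice, while 1 hour makes it only 1.1x as good.
lottery = expected_value(p_win=0.01, pot=100_000, quality=2.0)  # 1% chance of $100k
direct  = expected_value(p_win=1.00, pot=1_000,   quality=1.1)  # certain $1k

print(lottery, direct)  # under these assumptions, the lottery comes out ahead
```

The point of the sketch is that both donors direct the same expected dollars ($1,000), so any edge the lottery has comes entirely from the higher decision quality bought by concentrating research time, which is exactly what the caveats below put pressure on.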
Some of these don’t hold for many donors, and there are some additional considerations which undermine the value of lotteries:
Some donors may not feel confident that they can do much better with more time invested. They may even feel averse to the amount of money they’d affect if they won (even if, ex ante, they influence $X either way). They stand less to gain from donation lotteries because of this.
Choosing to donate to a donation lottery is not costless. For example, it may take a similar amount of time/resources to evaluate which fund they think is highest impact, as it would to understand and trust donation lotteries. This takes away some of the advantage of a donor lottery.
For some donors, there may be more advocacy potential in giving to a fund supported by a reputable evaluator than in a donation lottery.
I’d like to flag that I’m a little hesitant about putting too much weight on this consideration. Leaning too much into ‘advocacy potential’ (rather than just doing what’s straightforwardly effective) seems slippery. But I think it’d be a mistake to ignore this consideration.
A substantial amount of our traffic comes from people who are completely unfamiliar with effective altruism (e.g., people who just googled “Best charities” or just used our “How Rich Am I?” calculator), and I think funds are a better option for most of this audience (though perhaps for EA Forum users it’s a different story, so I really appreciate pushback here!).
Overall, I think if Giving What We Can changed its default recommendation from funds to donation lotteries, we’d be having less impact.
Though we see funds as the best default option, we would like to provide additional guidance on when it makes sense to choose other options. I’ve made a small edit to the version of this post on our website to acknowledge that donor lotteries could be a compelling alternative. My sense is that donor lotteries would be a better option than funds for someone who:
Understands the arguments in favour of a donor lottery, and also the mechanisms for how it works.
Would be able to donate cost-effectively if they spent more time on their decision.
Would be able to spend that time in the event of winning.
I also have a few thoughts about this comment in particular:
For example, I think it would be healthy if funds were accountable to a smaller number of randomly selected donors who had the time to investigate more deeply, rather than spending <10% as much time and being more likely to pick based on a quick skim of fund materials and advertising/social dynamics/etc. And it seems like there’s no way to escape from that regress by having GWWC evaluate evaluators, since then the donor must evaluate GWWC’s evaluations. From this perspective a donor lottery is really like a “free lunch” that’s hard to get in other ways.
Speaking personally, I’d also prefer fewer donors conducting deeper investigations of funds than a larger number conducting more shallow investigations. I think this is a very good consideration in favour of donation lotteries.
Speaking on behalf of Giving What We Can: though our work “evaluating the evaluators” will inform our recommended funds and charities (to provide a stronger basis for our recommendations) we are also motivated to make it easier for donors to choose which evaluators and funds they rely on by providing resources on the values implicit in their methodology + pointing to some potential strengths/weaknesses of their methodology.
Put another way, our vision for next year is to help:
Provide strong default options for donors, with a reasonable justification for those defaults (i.e., they’re supported by a trusted evaluator who we investigated).
Provide the tools for donors to choose the best fund or charity given their values and worldview.
Hi Ludwig, thanks for raising some of these issues around governance. I work on the research team at Giving What We Can, and I’m responding here specifically to the claims relating to our work. There are a few factual errors in your post, and other areas I’d like to add additional context on. I’ll touch on:
Our recommendations (we do disclose conflicts of interest).
The Longtermism Fund specifically (payout reports are about to be published).
Our relationship with EVF (we set our own strategy, independently fundraise, and have little to do with most organisations under EVF).
#1 Recommendations
With respect to our recommendations: They are determined by our inclusion criteria, which we regularly link to (for example, on our recommended charities page and on every charity page). As outlined in our inclusion criteria, we rely on our trusted evaluators to determine our giving recommendations. Longview Philanthropy and EA Funds are two of the five trusted evaluators we relied on this giving season. We explicitly outline our conflicts of interest with both organisations on our trusted evaluators page.
We want to provide the best possible giving recommendations to our donors. Unfortunately, given we are very connected to the effective giving ecosystem — and as you highlighted, part of EVF — this is regularly in tension with avoiding conflicts of interest. We did our best this giving season to highlight these conflicts, and justify why we chose the evaluators we did, but we want to do better next year (we touch on this in our most recent announcement of our new research direction).
#2 The Longtermism Fund
The fund will disclose all of its spending in regular payout reports. Its first report will be released shortly (by the end of today! It’s been in production over the past weeks).
As shared in our announcement of the fund, the fund is a collaboration between Giving What We Can and Longview. We (GWWC) are responsible for the communications around the fund; Longview are responsible for the grantmaking and research.
We also publicly committed to sharing reports outlining the fund’s grants in our announcement of the fund.
#3 Relationship with EVF
Giving What We Can initially helped create EVF’s predecessor (CEA) back in 2011, alongside 80,000 Hours — read more about its history here. In short, EVF currently provides GWWC with:
Operational support (e.g., finance, legal, HR) via EV Ops.
A shared Board of Trustees (each organisation has historically had its own “Active Trustee” who works closely with that organisation’s leader on strategy and management).
Shared privacy policy (this facilitates a single sign-on for GWWC, EA Forum and EA Global).
Some limited shared communications and facilities (e.g., some shared Slack channels, Notion spaces, and access to Trajan House—though nobody at GWWC currently uses this).
Importantly, GWWC independently:
Fundraises for its core expenses (i.e., we independently seek funding to pay for our staff and costs).
Sets its own strategy (we work as a team consulting GWWC members and other stakeholders to decide how we can have the most impact), does its own hiring, etc. See our most recent strategy update where we were seeking community feedback on our plans.
Chooses its own approach to giving recommendations (we receive no benefit for recommending organisations within EVF; historically, we have erred on the side of avoiding this due to perceived/potential conflicts of interest).
Happy to clarify any of the above.