Program Associate on Open Phil's Global Catastrophic Risks Capacity Building team.
🔸 GWWC Pledger
Michael Townsend🔸
(I no longer work at GWWC, but wrote the reports on the LTFF/ECF, and was involved in the first round of evaluations more generally.)
In general, I think GWWC's goal here is "to support donors in having the highest expected impact given their worldview", which can come apart from supporting donors to give to the most well-researched/vetted funding opportunities. For instance, if you have a longtermist worldview, or perhaps take AI x-risk very seriously, then I'd guess you'd still want to give to the LTFF/ECF even if you thought the quality of their evaluations was lower than GiveWell's.
Some of this is discussed in "Why and how GWWC evaluates evaluators", in the limitations section:
Finally, the quality of our recommendations is highly dependent on the quality of the charity evaluation field in a cause area, and hence inconsistent across cause areas. For example, the state of charity evaluation in animal welfare is less advanced than that in global health and wellbeing, so our evaluations and the resulting recommendations in animal welfare are necessarily lower-confidence than those in global health and wellbeing.
And also in each of the individual reports, e.g. from the ACE MG report:
As such, our bar for relying on an evaluator depends on the existence and quality of other donation options we have evaluated in the same cause area.
In cause areas where we currently rely on one or more evaluators that have passed our bar in a previous evaluation, any new evaluations we do will attempt to compare the quality of the evaluator's marginal grantmaking and/or charity recommendations to those of the evaluator(s) we already rely on in that cause area.
For worldviews and associated cause areas where we don't have existing evaluators we rely on, we expect evaluators to meet the bar of plausibly recommending giving opportunities that are among the best options for their stated worldview, compared to any other opportunity easily accessible to donors.
First just wanted to say that this:
In my first year after taking the pledge, I gave away 20% of my income. However I had been able to save and invest much of my disposable income from my relatively well paid career before taking the pledge and so had built up strong financial security for myself and my family. As a result, I increased my donations over time and since 2019, have given away 75% of my income.
...is really inspiring :).
I'm interested in knowing more about how Allan decides where to donate. For example:
I currently split my donations between the Longview Philanthropy Emerging Challenges Fund and the Long Term Future Fund – I believe in giving to funds and letting experts with much more knowledge than me identify the best donation opportunities.
How did Allan arrive at this decision, and how confident does he feel in it? Also, how connected does Allan feel with other EtG'ers who are giving similar amounts based on a similar worldview?
Just on this point:
Relatedly, it feels like this is not what the username field is for. If I'm interacting with someone on some topic unrelated to my advocacy, it feels intrusive and uncooperative to be bringing it into the conversation.
I think this argument might have a lot of power among folks who tend to think of social norms in quite explicit/analytical terms, and who put a lot of emphasis on being cooperative. But I suspect relatively few people will see this as uncooperative/intrusive, because the pin and the idea it's advocating are pretty non-offensive.
Luke, thank you for everything you've done for GWWC and the world.
I don't think many people get to meet someone with such extraordinary levels of care, both for those far away in space/time and for loved ones nearby. While I was working at GWWC, Luke's most common reason to take a day off was to help someone move house. Luke, your kindness, integrity and commitment are contagious – even with you no longer at the organisation, those virtues will stay with GWWC in large part because of how you demonstrated them.
Here's a graph of new 10% Pledgers since Luke joined GWWC.[1]
- ^
Courtesy of Claude… though my critical feedback for it is that it makes it look like 2024 hasn't happened yet, and that Luke joined in 2019. Both false!
Unfortunately, I can't see that option – it just displays my email.
Thanks for sharing this, and more importantly, for writing it. From my perspective, this is the best reporting on AI that I've seen. I've shared it with previously ultra-sceptical friends, and had an uncharacteristically positive response.
Hi Vasco – not all organisations gave permission to have their name shared, but it includes many of the fundraising organisations on this list.
Giving What We Can is looking for a Researcher to help us identify the most effective donation opportunities for a variety of worldviews, and recommend these to our donors.
Salary and benefits: Salary for candidates is based on this calculator (which is explained in more depth here). Benefits and policies depend on location, but we aim to provide benefits equally wherever we can. See here for an example of an offer to a US candidate – we have similar benefits in other locations.
Location: Remote.
Apply here by March 2nd (deadline extended from February 18th).
Essential skills, traits and experience
A passion for making the world a better place
Strong grasp of key research concepts in effective giving and effective altruism
A scout mindset and a strong commitment to communicating accurately and truthfully
Excellent analytical skills; comfort working with quantitative and qualitative frameworks
Probabilistic reasoning and calibration; the ability to think in a Bayesian way
Prioritisation and judgement; the ability to (re-)focus your time on whatâs most important and relevant
Ability to work both autonomously and in a team; comfort with working remotely
Desirable skills, traits and experience
Experience in impact-focused charity evaluation and/or grantmaking
Great written communication skills, especially in translating technical information to be understandable and compelling to the public
Generalist skills, flexibility, and adaptability to take on other types of valuable work needed at various times, in addition to the responsibilities mentioned above
About GWWC: GWWC is on a mission to create a world in which giving effectively and significantly is a cultural norm. The GWWC team is hard-working and mission-focused, with a culture of open and honest feedback. We also like to think of ourselves as a particularly friendly and optimistic bunch.
In all our work, we strive to take a positive and collaborative attitude, be transparent in our communication and decision-making, and adopt a scout mindset to guide us towards doing the most good we can do, including by evaluating our own impact and learning from the results. To learn more, check out our current strategy.
The process
Our hiring process involves four stages (applicants will be compensated for their time spent on stages 2-3):
Application form (~2 hours)
Written work test (~2 hours)
Work trial (~10 hours)
Interview, online over Zoom (~1 hour) and reference checks.
We will close applications on the 2nd of March and aim to make an offer (provided we find a suitable candidate) by the end of April, with the successful candidate starting as soon as possible thereafter.
See also here to get a sense of our approach from a hiring round we did in early 2022.
Where are the GWWC team donating in 2023?
How I feel about my GWWC Pledge
Thanks Vasco, this is good feedback.
To better reflect how your different recommendations are linked to particular worldviews, I think it would be good to change the name of your area/fund "global health and wellbeing" to "global human health and wellbeing"
We considered a wide variety of names, and after some deliberation (and a survey or two), we landed on "global health and wellbeing" because we think it strikes a good balance between being accurate and being compelling. I agree with some of the limitations you outlined, and like your alternative suggestion, especially from a researcher's point of view, which is where my focus is. I'll share this with the team, but I expect there would be too much cost to switch at this point.
However, I wonder how much of your and Sjir's views are being driven by path dependence. [...] Given this, I worry you may hesitate to recommend interventions in animal welfare over human welfare even if you found it much more plausible that both areas should be assessed under the same (impartial welfarist) worldview.
It's a bit tricky to respond to this having not (at least yet) done an analysis comparing animal versus human interventions. But if/when we do, I agree it would be important to be aware of the incentives you mentioned, and to avoid making decisions based on path dependencies rather than high-quality research. More generally, a good part of our motivation for this project was to help create better incentives for the effective giving ecosystem. So we'd see coming to difficult decisions on cause prioritisation, if we thought they were justified, as very much within the scope of our work and a way it could add value.
Hi Rebecca – we did not look into The Life You Can Save for this round. As shared here, we only looked into the six evaluators/funds listed in this post, and in our "Why and how GWWC evaluates evaluators" we shared how we decided which evaluators to prioritise. It's too soon to say which evaluators we'll look into next, though we can share that our current inclination is that looking into Founders Pledge's research, and expanding the cause areas we include (like climate change, or "meta" work), is a particularly high priority.
Thanks Nick! It was really illuminating for me personally to look under the hood of GW, and I'm glad you appreciated our summary of the work.
In this round of evaluations, we only looked into Animal Charity Evaluators, GiveWell, Happier Lives Institute, EA Funds' Animal Welfare Fund and Long-Term Future Fund, and Longview's Emerging Challenges Fund. In future evaluations, we would like to look into Founders Pledge's work, climate change more generally, and other evaluators. It's too soon to commit to which, and in which order, just yet.
Also, did you evaluate GW's Top Charities Fund or All Grants Fund?
Jonas' reply is spot on here – we essentially looked into both, and into GW more generally.
There is definitely substantial overlap between the four funds you listed, especially between GWWC's fund and EA Funds'. In principle, it doesn't have to be this way:
GWWC's Global Health and Wellbeing Fund could potentially grant based on evaluations other than GW's (e.g., potentially from Founders Pledge, or Happier Lives Institute, etc., depending on how our subsequent evaluations go).
EA Funds' Global Health and Development Fund could similarly appoint new advisors, or change its scope. But I can't speak on behalf of EA Funds!
GW's Top Charities Fund and All Grants Fund do make different grants, with the latter having a wider scope, but there is overlap.
Have you considered allocating the donations made to the GWWC Global Health and Wellbeing Fund to GW's funds?
We expect that, in effect, this is what will happen. That is, we expect GW to advise our fund as a proxy for the All Grants Fund. Operationally, it's better for us to grant directly to the organisations based on GW's advice (rather than, for example, sending the money to GW to regrant it) so that, among other reasons, charities can receive the money sooner. We already have this process set up for donations made to GW's funds on our platform. This means that, at least right now, giving to either the All Grants Fund or our cause area fund will have the same effect. But as above, this could change based on future evaluations of evaluators, which we see as a feature for donors who want to set up recurring donations that track our latest research.
Thanks Peter, and we'd of course like to extend the thanks back to HLI for being such an excellent collaborator here! Congratulations on publishing your new research. I'm eager to read more about it over the coming weeks and hopefully to dive into it in more detail next year in our next round of evaluations.
There is a relatively small comparison class here; we often say we're focused on "impact-focused" evaluators. Here is our database of evaluators we know of that we might consider in this reference class. In the medium to long run, however, we could imagine there being value in investigating an evaluator completely outside EA, with a very different approach. There could be some valuable lessons of best practice, and other insights, that could make this worthwhile. I expect we probably won't prioritise this until we have looked into more impact-focused evaluators.
Hi wes R, I'll answer your questions in this comment!
The impact measurements varied greatly by evaluator. For example, GW makes decisions using its "moral weights" (which primarily measure consumption and health outcomes, though I don't believe in a way that neatly reduces to QALYs). Meanwhile, HLI uses "WELLBYs". Other evaluators used different measurements at different times, or relied on subjective scores of cost-effectiveness. You can read more about these in our evaluations (linked to here). I'm not sure we have much in the way of a generalised view of which metrics we think should be used or not. In general:
These metrics should help support making more cost-effective recommendations and grants.
To the extent they do, we're happy to see them!
In some cases, metrics might end up forcing over-precision in a way that is not particularly helpful. In these cases, we think it could be more sensible to take a more subjective approach.
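To make the difference between these metrics a little more concrete, here is a minimal, purely illustrative sketch (in Python, with invented numbers that are not drawn from GiveWell, HLI, or any other evaluator) of how the ranking of two hypothetical interventions can flip depending on whether you score them in QALYs or WELLBYs per dollar:

```python
# Purely illustrative: every figure below is invented for the example,
# not taken from GiveWell, HLI, or any other evaluator's work.

interventions = {
    # name: (cost per person in $, QALYs gained per person, WELLBYs gained per person)
    "Hypothetical intervention A": (100.0, 0.05, 0.3),
    "Hypothetical intervention B": (100.0, 0.02, 0.8),
}

for name, (cost, qalys, wellbys) in interventions.items():
    print(f"{name}: {qalys / cost:.4f} QALYs per $, {wellbys / cost:.4f} WELLBYs per $")

# With these made-up figures, A looks more cost-effective in QALYs per dollar,
# while B looks more cost-effective in WELLBYs per dollar -- which is why the
# choice of metric can change which recommendation looks "best".
```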
Hope that helps!
This is a really insightful question!
I think it's fair to characterise our evaluations as looking for the "best" charity recommendations, rather than the best charity evaluators, or recommendations that reach a particular standard but are not the best. Though we're looking to recommend the best charities, we don't think this means there's no value in looking into "great-charity evaluators", as you called them. We don't take an all-or-nothing approach when looking into an evaluator's work and recommendations, and can choose to include only the recommendations from that evaluator that meet our potentially higher standard. This means that, so long as it's possible some of the recommendations of a "great-charity evaluator" are the best by a particular worldview, we'd see value in looking into them.
In one sense, this increases the bar for our evaluations, but in another it also means an evaluator's recommendations might be the best even if we weren't particularly impressed by the quality of the work. For example, suppose there were a cause area with only one evaluator: the threshold for that evaluator being the best may well be that they are doing a sufficiently good job that there is a sufficiently plausible worldview by which donating via their recommendations is still a donor's best option (i.e., compared to donating to the best evaluator in another area).
It's too early to commit to how we will approach future evaluations; however, we currently lean towards sticking with the core idea of focusing on helping donors "maximise" expected cost-effectiveness, rather than "maximising" the number of donors giving cost-effectively / providing a variety of "great-but-not-best" options.
You might also explicitly state that you don't intend to evaluate great-charity recommenders, at least at this time.
As above, we would see value in looking at charity evaluators who take an approach of recommending everything above a minimum standard, but we would only look to follow the recommendations we thought were the best (...by some sufficiently plausible worldview).
but would recommend making it clear in posts and webpages that you are evaluating best-charity evaluators under standards appropriate for best-charity evaluators
I'd be interested in where you think we could improve our communications here. Part of the challenge we've faced is that we want to be careful not to overstate our work. For example, "we only provide recommendations from the best evaluators we know of and have looked into" is accurate, but "we only provide recommendations from the best evaluators" is not (because there are evaluators we haven't looked into yet). Another challenge is not to overly qualify everything we say, to the point of being confusing and inaccessible to regular donors. Still, after scrolling through some of our content, I think we could find a way to thread this needle better, as it is an important distinction to emphasise – we also don't want to understate our work!
Just speaking for myself, I'd guess those would be the cruxes, though I don't personally see easy fixes. I also worry that you could err on the side of being too cautious, by potentially adding warning labels that give people an overly negative impression compared to the underlying reality. I'm curious whether there are examples where you think GWWC could strike a better balance.
I think this might be symptomatic of a broader challenge for effective giving for GCR, which is that most of the canonical arguments for focusing on cost-effectiveness involve GHW-specific examples that don't clearly generalise to the GCR space. But I don't think that indicates you shouldn't give to GCR, or care about cost-effectiveness in the GCR space – from a very plausible worldview (or at least, the worldview I have!) the GCR-focused funding opportunities are the most impactful funding opportunities available. It's just that the kind of reasoning underlying those recommendations/evaluations is quite different.