Hi Phib, Michael from the GWWC Research team here! In our latest impact evaluation we did need to consider how to think about future donations. We explain how we did this in the appendix "Our approach to discount rates". Essentially, it's a really complex topic, and you're right that existential risk plays into it (we note this as one of the key considerations). If you discount the future just based on Ord's existential risk estimates, based on some quick maths, the 1 in 6 chance over 100 years should discount each year by 0.2% (1 - ((1 - 1/6)^(1/100)) = 0.02).
Yet there are many other considerations that also weigh into this, at least from GWWC's perspective. Most significant is how we should expect the cost-effectiveness of charities to change over time.
We chose to use a discount rate of 3.5% for our best-guess estimates (and 5% for our conservative estimates), based on the recommendation from the UK government's Green Book. We explain why we made that decision in our report. It was largely motivated by our framework of being useful/transparent/justifiable over being academically correct and thorough.
If you're interested in this topic, and in how to think about discount rates in general, you may find Founders Pledge's report on investing to give an interesting read.
Hi Joel – great questions!
(1) Are non-reporters counted as giving $0?
Yes – at least for recorded donations (i.e., the donations that are within our database). For example, in cell C41 of our working sheet, we provide the average recorded donations of a GWWC Pledger in 2022-USD ($4,132), and this average assumes non-reporters are giving $0. Similarly, in our "pledge statistics" sheet, which provides the average amount we record being given per Pledger per cohort, and by year, we also assumed non-reporters are giving $0.
(2) Does this mean we are underestimating the amount given by Pledgers?
Only for recorded donations – we also tried to account for donations that were made but are not in our records. We discuss this more here – but in sum, for our best-guess estimates, we estimated that our records only account for 79% of all pledge donations, and therefore we need to make an upwards adjustment of 1.27 to go from recorded donations to all donations made. We discuss how we arrived at this estimate pretty extensively in our appendix (with our methodology here being similar to how we analysed our counterfactual influence). For our conservative estimates, we did not make any recording adjustments, and we think this does underestimate the amount given by Pledgers.
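For concreteness, here's a toy sketch of the adjustment; only the 79% recorded share and the $4,132 average recorded donation are from our evaluation, the rest is purely illustrative:

```python
# Toy sketch of the recording adjustment. Only the 79% recorded share and
# the $4,132 average recorded donation come from the evaluation itself.
recorded_share = 0.79                   # best-guess share of pledge donations in our records
recording_adjustment = 1 / recorded_share
print(round(recording_adjustment, 2))   # 1.27

avg_recorded = 4_132                    # average recorded donations per Pledger (2022-USD)
avg_estimated = avg_recorded * recording_adjustment
print(round(avg_estimated))             # best-guess average including unrecorded donations
```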
(3) How did we handle nonresponse bias and could we handle it better?
When estimating our counterfactual influence, we explicitly accounted for nonresponse bias. To do so, we treated respondents and nonrespondents separately, assuming a fraction of influence on nonrespondents compared to respondents for all surveys (illustrated in the sketch below):
50% for our best-guess estimates.
25% for our conservative estimates.
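As a rough illustration of how that adjustment works — the response rate and respondent influence below are made-up placeholders; only the 50%/25% fractions come from our report:

```python
# Hypothetical sketch of the nonresponse adjustment. Only the 50% (best-guess)
# and 25% (conservative) nonrespondent fractions come from the report; the
# response rate and respondent influence used below are made up.
def adjusted_influence(respondent_influence, response_rate, nonrespondent_fraction):
    """Blend influence on respondents with assumed influence on nonrespondents."""
    nonrespondent_influence = respondent_influence * nonrespondent_fraction
    return (response_rate * respondent_influence
            + (1 - response_rate) * nonrespondent_influence)

# e.g., 30% counterfactual influence among respondents, 40% response rate:
print(round(adjusted_influence(0.30, 0.40, 0.50), 3))  # best guess: 0.21
print(round(adjusted_influence(0.30, 0.40, 0.25), 3))  # conservative: 0.165
```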
We actually did consider adjusting this fraction depending on the survey we were looking at, and in our appendix we explain why we chose not to in each case. Could we handle this better? Definitely! I really appreciate your suggestions here – we explicitly outline handling nonresponse bias as one of the ways we would like to improve future evaluations.
(4) Could we incorporate population base rates of giving when considering our counterfactual influence?
I'd love to hear more about this suggestion; it's not obvious to me how we could do this. For example, one interpretation here would be to look at how much Pledgers are giving compared to the population base rate. Presumably, we'd find they are giving more. But I'm not sure how we could use that to inform our counterfactual influence, because there are at least two competing explanations for why they are giving more:
One explanation is that we are simply causing them to give more (so we should increase our estimated counterfactual influence).
Another is that we are just selecting for people who are already giving a lot more than the average population (in which case, we shouldn't increase our estimated counterfactual influence).
But perhaps I'm missing the mark here, and this kind of reasoning/analysis is not really what you were thinking of. As I said, would love to hear more on this idea.
(Also, appreciate your kind words on the thoroughness/robustness)
Thanks :)!
You can see in the donations by cause area a breakdown of the causes pledge and non-pledge donors give to. This could potentially inform a multiplier for the particular cause areas. I don't think we considered doing this, and I am not sure it's something we'll do in future, but we'd be happy to see others do this using the information we provide.
Unfortunately, we don't have a strong sense of how we influenced which causes donors gave to; the only thing that comes to mind is our question: "Please list your best guess of up to three organisations you likely would *not* have donated to if Giving What We Can, or its donation platform, did not exist (i.e. donations where you think GWWC has affected your decision)", the results of which you can find on page 19 of our survey documentation here. Only an extremely small sample of non-pledge donors responded to the question, though. Getting a better sense of our influence here, as well as generally analysing trends in which cause areas our donors give to, is something we'd like to explore in our future impact evaluations.
Ah, I can see what you mean regarding our text – I assume in this passage:
We want to emphasise that this data surprised us and caused us to reevaluate a key assumption we had when we began our impact evaluation. Specifically, we went into this impact evaluation expecting to see some kind of decay per year of giving. In our 2015 impact evaluation, we assumed a decay of 5% (and even this was criticised for seeming optimistic compared to EA Survey data – a criticism we agreed with at the time). Yet, what we in fact seem to be seeing is an increase in average giving per year since taking the Pledge, even when adjusting for inflation.
What you say is right: we agree there seems to be a decay in fulfilment/reporting rates (which is what the earlier attrition discussion was mostly about), but we just add the additional observation that giving increasing over time makes up for this.
There is a sense in which we do disagree with that earlier discussion, which is that we think the kind of decay that would be relevant to modelling the value of the Pledge is the decay in average giving over time, and at least here, we do not see a decay. But we could've been clearer about this; at least on my reading, I think the paragraph I quoted above conflates different sorts of "decay".
Really appreciate this analysis, Jeff.
Point taken that there is no clear plateau at 30% – it'll be interesting to see what future data shows.
Part of the reason we have less analysis on the change of reporting rates over time is that we did not directly incorporate this rate of change into our model. For example, the table of reporting rates was primarily used in our evaluation to test a hypothesis for why we see an increase in average giving (even assuming people who are not reporting are not giving at all). Our model does not assume reporting rates don't decline, nor does it assume the decline in reporting rates plateaus.
Instead, we investigated how average giving (which is a product of both reporting rates and the average amount given conditional on reporting) changes over time. We saw that the decline in reporting rates is (more than) compensated by the increase in giving conditional on reporting. It could be that this will no longer remain true beyond a certain time horizon (though, perhaps it will!), but there are other arguably conservative assumptions for these long time horizons (e.g., that giving stops at pension age and doesn't include any legacy giving). Some of these considerations come up as we discuss why we did not assume a decay in our influence and in the limitations of our Pledge model (at the bottom of this section, right above this one).
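The decomposition itself is simple; here's a sketch with purely illustrative numbers (none of these are from our data):

```python
# Average giving per Pledger = reporting rate x average gift conditional on
# reporting. The numbers below are purely illustrative, not from our data.
def average_giving(reporting_rate, avg_gift_if_reporting):
    return reporting_rate * avg_gift_if_reporting

year_1 = average_giving(0.50, 5_000)   # e.g., 50% report, averaging $5,000 each
year_8 = average_giving(0.30, 10_000)  # e.g., 30% report, averaging $10,000 each
print(year_1, year_8)  # 2500.0 vs 3000.0: larger gifts (more than) offset fewer reporters
```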
On your final point:
Separately, I think it would be pretty reasonable to drop the pre-2011 reporting data. I think this probably represents something weird about starting up, like not collecting data thoroughly at first, and not about user behavior? I haven't done this in my analysis above, though, because since I'm weighting by cohort size it doesn't do very much.
Do you mean excluding it just for the purpose of analysing reporting rates over time? If so, that could well be right, and if we investigate this directly in future impact evaluations we'll need to look into what the quality/relevance of that data was and make a call here.
Thanks for your questions Jeff!
To answer point by point:
How does [the evaluation's finding that Pledgers seem to be giving more on average each year after taking the Pledge] handle members who aren't reporting any donations?
The (tentative) finding that Pledgers' giving increases each year after taking the Pledge assumes that members who aren't reporting any donations are not donating.
How does reporting rate vary by tenure?
We include a table "Proportion of GWWC Pledgers who record any donations by Pledge year (per cohort)" on page 48. In sum: reporting declines in the years after the Pledge, but that decline seems to plateau at a reporting rate of ~30%.
Was the $7,619 the average among [the 250-person sample we used for the GWWC reporting accuracy survey] who recorded any donations, or counting ones who didn't record donations as having donated $0? What fraction of members in the 250-person sample recorded any donations?
The $7,619 figure is the average if you count those who did not record a donation as having donated $0. Unfortunately, I don't have the fraction of the 250-person sample who recorded donations at all on hand. However, I can give an informed guess: the sample was a randomly selected group of people who had taken the GWWC Pledge before 2021, and eyeballing the table I linked above, ~40-50% of pre-2021 Pledgers record a donation each year.
Where does the decline in the proportion of people giving fit into the model?
The model does not directly incorporate the decrease in the proportion of people recording/giving, and neither does it directly incorporate the increase in donation sizes for people who record/give. The motivation here is that – at least in the data so far – we see these effects cancel out (indeed, we see that the increase in donation size slightly outweighs the decrease in recording rates, but we're not sure that trend will persist). We go into much more depth on this in our appendix section "Why we did not assume a decay in the average amount given per year".
GWWC's 2020–2022 Impact evaluation (executive summary)
Thanks for writing this!
How much easier/more difficult do you think it would be to evaluate these interventions from a subjective well-being point of view, like the kind HLI use?
My intuition is that these interventions might be undervalued when looking at effects in terms of the economic/health outcomes that GW/OP use, because I expect they miss a substantial amount of the benefits these interventions might bring.[1]
[1] More exactly: any framework is going to capture only a fraction of the outcomes of any given intervention. I suspect that interventions protecting against VAWG will have a smaller fraction of their benefits captured by the health/economic outcomes GW/OP use than interventions like distributing bednets, cash transfers, and deworming. This is purely intuition though!
What are the best charities to donate to in 2022?
Longtermism Fund: December 2022 Grants Report
Hi Ludwig, thanks for raising some of these issues around governance. I work on the research team at Giving What We Can, and I'm responding here specifically to the claims relating to our work. There are a few factual errors in your post, and other areas I'd like to add additional context on. I'll touch on:
Our recommendations (we do disclose conflicts of interest).
The Longtermism Fund specifically (payout reports are about to be published).
Our relationship with EVF (we set our own strategy, independently fundraise, and have little to do with most organisations under EVF).
#1 Recommendations
With respect to our recommendations: they are determined by our inclusion criteria, which we regularly link to (for example, on our recommended charities page and on every charity page). As outlined in our inclusion criteria, we rely on our trusted evaluators to determine our giving recommendations. Longview Philanthropy and EA Funds are two of the five trusted evaluators we relied on this giving season. We explicitly outline our conflicts of interest with both organisations on our trusted evaluators page.
We want to provide the best possible giving recommendations to our donors. Unfortunately, given we are very connected to the effective giving ecosystem – and, as you highlighted, part of EVF – this is regularly in tension with avoiding conflicts of interest. We did our best this giving season to highlight these conflicts and justify why we chose the evaluators we did, but we want to do better next year (we touch on this in our most recent announcement of our new research direction).
#2 The Longtermism Fund
The fund will disclose all of its spending in regular payout reports. Its first report will be released shortly (by the end of today! It's been in production over the past weeks).
As shared in our announcement of the fund, the fund is a collaboration between Giving What We Can and Longview. We (GWWC) are responsible for the communications around the fund; Longview are responsible for the grantmaking and research.
We also publicly committed to sharing reports outlining the fund's grants in our announcement of the fund.
#3 Relationship with EVF
Giving What We Can initially helped create EVF's predecessor (CEA) back in 2011, alongside 80,000 Hours – read more about its history here. In short, EVF currently provides GWWC with:
Operational support (e.g., finance, legal, HR) via EV Ops.
A Board of Trustees (of which each organisation has historically had its own "Active Trustee", who has worked closely with the respective organisation's leader on strategy and management).
Shared privacy policy (this facilitates a single sign-on for GWWC, EA Forum and EA Global).
Some limited shared communications and facilities (e.g., some shared Slack channels, Notion spaces, and access to Trajan House, though nobody at GWWC currently uses this).
Importantly, GWWC independently:
Fundraises for its core expenses (i.e., we independently seek funding to pay for our staff and costs).
Sets its own strategy (we work as a team consulting GWWC members and other stakeholders to decide how we can have the most impact), does its own hiring, etc. See our most recent strategy update where we were seeking community feedback on our plans.
Chooses its own approach to giving recommendations (we receive no benefit for recommending organisations within EVF; historically, we err on the side of avoiding this due to perceived/potential conflicts of interest).
Happy to clarify any of the above.
You might be interested in donating to the Patient Philanthropy Fund.
It's at the stage where it makes small grants (~1% of its total portfolio per year) but is primarily investing its funds with the aim of growing them.
I think fees make sense for investment funds because they increase the incentive to make a profit for customers. But I don't think a straightforward fee for charitable funds would increase their incentive to have an impact (though perhaps it would increase their incentive to convince donors they are having an impact – but this is still a "trust-based arrangement").
That said, I take your point about the problems with trust-based arrangements! I feel that in an ideal world, charitable funds would be funded in proportion to the quality of their grants. To some extent, this is what already happens (often these funds are themselves funded by a different funder after conducting some kind of evaluation), but it's often not public. I'm hoping that Giving What We Can's work evaluating the evaluators will help provide additional accountability and help donors make a more informed choice about which funds to trust.
I agree that providing accountability to evaluators is a real challenge. I don't have much more to add right now, other than that we really hope our work will help!
As for your last point – at least from a simple expected-value perspective, I'm not sure you should care too much about other lottery participants' values. The idea is that by donating to the lottery, you're not increasing the expected amount of money other participants influence. Of course, there could be other reasons to not want to participate in lotteries with people whose values you don't share.
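A toy example of that invariance (all amounts are made up):

```python
# Toy example: in a donor lottery where the pot is the sum of entries and
# each entrant wins with probability proportional to their entry, everyone's
# expected money-influenced equals their own contribution, no matter who
# else joins. All amounts here are made up.
entries = {"you": 1_000, "alice": 4_000, "bob": 5_000}
pot = sum(entries.values())
for name, entry in entries.items():
    win_prob = entry / pot
    expected_influence = win_prob * pot  # always equals the entry itself
    print(f"{name}: P(win)={win_prob:.0%}, expected influence=${expected_influence:,.0f}")
```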
Thanks for the thoughtful comment.
I think there's a strong theoretical case in favour of donation lotteries – Giving What We Can just announced our 2022/2023 lottery is open!
I see the case in favour of donation lotteries as relying on some premises that are often, but not always, true:
Spending more time researching a donation opportunity increases the expected value of a donation.
Spending time researching a donation opportunity is costly, and a donation lottery allows you to only need to spend this time if you win.
Therefore, all else equal, it's more impactful (in expectation) to have a 1% chance of spending 100 hours to decide where $100,000 should go than it is to have a 100% chance of spending 1 hour to decide where $1,000 should go (see the sketch after this list).
And donation lotteries provide a mechanism to do the more impactful thing.
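As a stylised expected-value comparison of those two options; the research multiplier is a made-up placeholder for how much deeper research improves each dollar's impact:

```python
# Stylised comparison of the two options in premise 3. The research
# multiplier is a made-up placeholder for how much more effective each
# dollar becomes after 100 hours of research versus 1 hour.
research_multiplier = 1.5

lottery_ev = 0.01 * 100_000 * research_multiplier  # 1% chance of directing $100,000 well
direct_ev = 1.00 * 1_000 * 1.0                     # certainty of directing $1,000 after 1 hour
expected_hours = 0.01 * 100                        # expected research time: 1 hour either way
print(lottery_ev, direct_ev, expected_hours)       # 1500.0 vs 1000.0, for the same expected hour
```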
Some of these don't hold for many donors, and there are some additional considerations which undermine the value of lotteries:
Some donors may not feel confident that they can do much better with more time invested. They may even feel averse about the amount of money they'd affect if they won (even if, ex ante, they influenced $X either way). They stand less to gain from donation lotteries because of this.
Choosing to donate to a donation lottery is not costless. For example, it may take a similar amount of time/resources to understand and trust donation lotteries as it would to evaluate which fund they think is highest impact. This takes away some of the advantage of a donor lottery.
For some donors, there may be more advocacy potential in giving to a fund supported by a reputable evaluator than in a donation lottery.
I'd like to flag that I'm a little more reticent about putting too much weight on this consideration. Leaning too much into "advocacy potential" (rather than just doing what's straightforwardly effective) seems slippery. But I think it'd be a mistake to ignore this consideration.
A substantial amount of our traffic comes from people who are completely unfamiliar with effective altruism (e.g., people who just googled "best charities" or just used our "How Rich Am I?" calculator), and I think funds are a better option for most of this audience (though perhaps for EA Forum users it's a different story, so I really appreciate pushback here!).
Overall, I think if Giving What We Can changed its default recommendation from funds to donation lotteries, we'd be having less impact.
Though we see funds as the best default option, we would like to provide additional guidance on when it makes sense to choose other options. I've made a small edit to the version of this post on our website to acknowledge that donor lotteries could be a compelling alternative. My sense is that donor lotteries would be a better option than funds for someone who:
Understands the arguments in favour of a donor lottery, and also the mechanisms for how it works.
Would be able to donate cost-effectively if they spent more time on their decision.
Would be able to spend that time in the event of winning.
I also have a few thoughts about this comment in particular:
For example, I think it would be healthy if funds were accountable to a smaller number of randomly selected donors who had the time to investigate more deeply, rather than spending <10% as much time and being more likely to pick based on a quick skim of fund materials and advertising/social dynamics/etc. And it seems like there's no way to escape from that regress by having GWWC evaluate evaluators, since then the donor must evaluate GWWC's evaluations. From this perspective a donor lottery is really like a "free lunch" that's hard to get in other ways.
Speaking personally, I'd also prefer fewer donors conducting deeper investigations of funds than a larger number conducting more shallow investigations. I think this is a very good consideration in favour of donation lotteries.
Speaking on behalf of Giving What We Can: though our work "evaluating the evaluators" will inform our recommended funds and charities (to provide a stronger basis for our recommendations), we are also motivated to make it easier for donors to choose which evaluators and funds they rely on by providing resources on the values implicit in their methodology + pointing to some potential strengths/weaknesses of their methodology.
Put another way, our vision for next year is to help:
Provide strong default options for donors, with a reasonable justification for those defaults (i.e., they're supported by a trusted evaluator who we investigated).
Provide the tools for donors to choose the best fund or charity given their values and worldview.
Really cool that you and your friend are meeting up on NYE to do this :)!
RE how to structure your thinking, Giving What We Can's recommended charities page and donation platform contain a few additional charities and funds. We also generally recommend giving via funds (though I think there are some benefits to trying to do your own research!).
Hi Vasco, great question :).
There are a few considerations that might be relevant here:
A lot here hinges on the extent to which donations to the LTFF are fungible with large funders (like Open Philanthropy). To the extent they do funge, your donation might end up being as cost-effective as their last dollar, regardless of which year you give it.
Another point: the LTFF at all points likely funds everything above a certain "bar" of cost-effectiveness. But that bar should change based on the best information at the time (i.e., the bar might be lower when there is a lot of funding available; it might be higher when there's not; it may also change depending on how "on fire" the world appears to be). I'm much less confident about this point, but it makes me think that, to the extent you trust the grantmakers to be well informed, you shouldn't worry too much about the timing of your donation. They always have the option of saving it – I don't believe they have a requirement to disburse all their grants each year.
Hey Bruce, these are some great considerations!
The Patient Philanthropy Fund (PPF) is a fantastic option if you find the arguments behind patient philanthropy compelling. In my view, one of the biggest arguments against patient philanthropy is the idea that, in practice, you may fail to donate the money after all. I like that the PPF removes you from the equation here. I also like that there are (what seem to me to be) reasonable governance mechanisms to ensure that the money will end up being donated.
That said, I don't have a strong view about the merits of patient philanthropy compared to giving now. You can read some of the arguments here. I (very tentatively) take the view that, on the margin, philanthropists are already saving too much and are failing to sufficiently scale up their giving. This makes me think that marginal patient philanthropy is less cost-effective than marginal donations. But... I'm not sure this is the right way to think about this. There could be something different about the PPF (which is saving intentionally, and with an attempt to do so wisely) compared to most philanthropists, who are saving more haphazardly.
You mentioned something else – whether to save some % and give some % now. I think that's a good question. My hunch here is that it's exceedingly unlikely that a mixed portfolio is maximising expected value. Happy to say more about this if you're interested, but this has been a long comment already :) thanks for the great points.
No problem!
Regarding:
There was a typo in my answer before: (1 - ((1 - 1/6)^(1/100)) = 0.0018), which is ~0.2% (not 0.2), and is a fair amount smaller than the discount rate we actually used (3.5%). Still, if you assigned a greater probability of existential risk this century than Ord does, you could end up with a (potentially much) higher discount rate. Alternatively, even with a high existential risk estimate, if you thought we were going to find more and more cost-effective giving opportunities as time goes on, then at least for the purpose of our impact evaluation, these effects could cancel out.
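In code, the corrected calculation (assuming the 1-in-6 risk is spread evenly across the century):

```python
# Annualise Ord's 1-in-6 existential risk over a century into a constant
# per-year discount, assuming the risk is spread evenly across the 100 years.
century_risk = 1 / 6
annual_discount = 1 - (1 - century_risk) ** (1 / 100)
print(f"{annual_discount:.4f}")  # 0.0018, i.e. ~0.2% per year (vs the 3.5% we used)
```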
I think if we spent more time trying to come to an all-things-considered view on this topic, we'd still be left with considerable uncertainty, and so I think it was the right call for us to just acknowledge this uncertainty and take the pragmatic approach of deferring to the Green Book.
In terms of the general tension between potentially high x-risk and the chance of transformative AI, I can only speak personally (not on behalf of GWWC). It's something on my mind, but it's unclear to me what exactly the tension is. I still think it's great to move money to effective charities across a range of impactful causes, and I'm excited about building a culture of giving significantly and effectively throughout one's life (i.e., via the Pledge). I don't think GWWC should pivot and become specifically focused on one cause (e.g., AI), and otherwise I'm not sure exactly what the potential for transformative AI should imply for GWWC.