Where I Am Donating in 2016
Part of a series for My Cause Selection 2016. For background, see my writings on cause selection for 2015 and my series on quantitative models.
Introduction
In my previous essay, I explained why I am prioritizing animal advocacy as a cause area. In this essay, I decide where to donate. I share some general considerations, briefly discuss some promising organizations I did not prioritize, and then list my top candidates for donation and explain why I considered them. I conclude with a final decision about where to donate.
This year, I plan on donating $20,000 to the Good Food Institute (GFI), which primarily works to foster the development of animal product alternatives through supporting innovation and promoting research. I believe it has an extraordinarily large expected effect on reducing animal consumption and contributing to improving societal values.
My writeup last year persuaded people to donate a total of about $40,000 to my favorite charities; if I move a similar amount this year, I believe GFI will still have substantial room for more funding even after that.
I will donate a few weeks after publishing this, so you have some time to persuade me if you believe I should make a different decision. Another donor plans to contribute an additional $60,000 AUD (~$45,000 USD) to GFI and is also open to persuasion.
This essay builds on last year’s document. Unless I say otherwise here or in one of my previous writings, I still endorse the claims I made last year. Last year, I discussed my fundamental values and my beliefs about broad-level causes plus a handful of organizations, so I will not retread this ground.
Contents
General Considerations
In last year’s cause selection essay, I wrote a substantial section on general considerations that discussed my background beliefs and motivation. Rather than repeat what I’ve written there, I invite you to read it if you wish to understand more of the background behind this essay. Here, I will elaborate on new considerations and how my process has changed since last year.
My process
Based on what I learned last time, I am doing a few things differently this year.
Fairly early on, I decided to prioritize animal advocacy and only looked at charities within this cause area. Last year, I leaned toward AI safety as the top cause but still looked into charities in many other causes in order to delay the decision about which general area to support. But this didn’t actually make it any easier to pick a cause, so I do not believe this was a good use of time. This year, I still have lots of uncertainty about which cause area looks best, but I don’t expect that I will be able to reduce that uncertainty by investigating charities in lots of cause areas. So I decided to focus exclusively on animal advocacy; I explain why in a previous essay.
I will rely heavily on my quantitative model and quantify my decision-making as much as possible. I’ve struggled to appropriately quantify a few important parts of my decision inputs, so I consider these independently. Most significantly, I consider room for more funding and learning value as separate factors because I have not found a good way to quantitatively model these.
On cause prioritization
In last year’s document I summarized my thoughts on many causes. Since then, I have written about how:
We have better feedback loops for values spreading than for GCR reduction
Global poverty charities are more speculative and uncertain than people usually claim
We can and should take some expected value estimates literally
This year, I decided to prioritize animal advocacy. My essay on the subject explains why I wanted to focus on a specific cause and why I tentatively expect animal advocacy to be the most impactful. I have a lot of uncertainty here and I can think of plenty of good reasons to prioritize existential risk reduction instead.
Modeling problems
Like I said before, quantitative models are dumb. I only use them because not using quantitative models is even dumber.
My quantitative model has some serious problems, mostly related to how to reason about priors and posteriors. Because I do not know how to resolve these problems, I want to raise them so people understand the model’s shortcomings and have the opportunity to suggest improvements.
The post-posterior problem
Suppose I want to estimate the impact of REG, which raises money for effective charities. I do this by adding up the expected value of all the charities that REG raises money for and dividing by its budget. When I’m adding up the expected values for different charities, I use their posterior expected values instead of my naive estimates because I care about their “true” expected value. But then what if I also want to calculate the posterior for REG? Now I’m computing a posterior of a posterior, which seems wrong.
Another problem: if I adjust expected value calculations for room for more funding before taking the posterior, then room for more funding barely matters. So instead I factor in room for more funding after taking the posterior. But that’s a pretty unprincipled decision: why should room for more funding get a special post-posterior status when no other inputs do?
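To make the double-shrinkage issue concrete, here is a minimal sketch. It assumes normal priors and normal estimate errors on a log cost-effectiveness scale, and every number is a hypothetical placeholder rather than a value from my actual model:

```python
def posterior_mean(prior_mu, prior_sd, est_mu, est_sd):
    """Precision-weighted Bayesian update: normal prior, normal likelihood."""
    prior_prec = 1 / prior_sd**2
    est_prec = 1 / est_sd**2
    return (prior_mu * prior_prec + est_mu * est_prec) / (prior_prec + est_prec)

PRIOR_MU, PRIOR_SD = 0.0, 1.0

# Step 1: shrink each object-level charity's naive estimate toward the prior.
charity_estimates = [(3.0, 1.0), (2.0, 1.5)]  # hypothetical (naive log-EV, sd)
charity_posteriors = [posterior_mean(PRIOR_MU, PRIOR_SD, mu, sd)
                      for mu, sd in charity_estimates]

# Step 2: the fundraiser's naive estimate is built from those
# already-shrunk posteriors (ignoring budget division for simplicity)...
reg_naive = sum(charity_posteriors)

# Step 3: ...and then gets shrunk toward the prior a second time.
reg_posterior = posterior_mean(PRIOR_MU, PRIOR_SD, reg_naive, 1.0)

# The same evidence is discounted twice, so the fundraiser ends up
# penalized relative to the charities it raises money for.
print(charity_posteriors, reg_naive, reg_posterior)
```

Under these made-up inputs, each charity's estimate is shrunk once, and then REG's estimate, which is a sum of already-shrunk values, is shrunk again, so REG gets a double discount that no individual charity faces.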
The direct vs. far-future effects problem
While writing this, I wrote an expected value calculation for the Good Food Institute to compare against my existing calculation for vegetarian outreach (as done by organizations like Mercy for Animals (MFA)). According to my estimate, GFI has a higher expected value but also a higher variance. My model suggests that GFI has a higher posterior than veg outreach for its direct effects (i.e., immediately reducing animal suffering), but a lower posterior for its far future effects (i.e., shifting toward a future world with less suffering on a large scale).
That doesn’t make any sense. According to my model, these interventions’ effects on reducing suffering in the short term strongly correlate with their effects on suffering in the long term.1 That means the way I use a prior distribution and then update based on evidence differs substantially from how the world really works.
I tend to believe that I can trust the posterior for direct effects more than I can trust the posterior for far-future effects, so if GFI looks better in the short term then I should expect it to have greater effects on the far future as well.
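A toy example shows how this inconsistency can arise. With a precision-weighted update, an intervention whose far-future estimate has much higher variance gets pulled toward the prior more aggressively, which can flip the ordering even when direct and far-future estimates move together. All the numbers below are made up for illustration, not inputs from my model:

```python
def posterior_mean(prior_mu, prior_sd, est_mu, est_sd):
    """Precision-weighted Bayesian update: normal prior, normal likelihood."""
    prior_prec = 1 / prior_sd**2
    est_prec = 1 / est_sd**2
    return (prior_mu * prior_prec + est_mu * est_prec) / (prior_prec + est_prec)

PRIOR_MU, PRIOR_SD = 0.0, 1.0  # log-scale prior, hypothetical

# Direct effects: GFI has the higher estimate with modest uncertainty.
gfi_direct = posterior_mean(PRIOR_MU, PRIOR_SD, 4.0, 1.0)  # -> 2.0
veg_direct = posterior_mean(PRIOR_MU, PRIOR_SD, 2.0, 0.5)  # -> 1.6

# Far-future effects: both estimates scale up together (correlated with
# direct effects), but GFI's variance grows much more, so the prior
# drags its posterior down further.
gfi_far = posterior_mean(PRIOR_MU, PRIOR_SD, 9.0, 4.0)  # -> ~0.53
veg_far = posterior_mean(PRIOR_MU, PRIOR_SD, 7.0, 2.0)  # -> 1.4

assert gfi_direct > veg_direct and gfi_far < veg_far  # the ordering flips
```

The model treats the two posteriors as independent, so the flip happens even though, in reality, learning that GFI's direct effects are large should raise my estimate of its far-future effects too.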
My writing process
I wanted to complete this writeup earlier in the year, but it ended up taking longer than expected (in terms of calendar time, not actual time spent working—I worked on it less frequently than I had initially anticipated). I prefer not to donate close to the end of the year. When organizations receive a large chunk of their donations all at once in December, it makes it harder for them to plan their annual expenses because they cannot easily predict how much funding they will have. I try to balance this out by donating earlier. Unfortunately, I did not do that this year.
Here’s about how long it took me to decide where to donate:
10 hours thinking about and writing my essay on which cause to prioritize
5 hours narrowing down a list of finalists
10 hours talking to my finalists
5 hours building quantitative models
10 hours on this writeup
To compare my finalists, I considered what I needed to know to build a reasonable quantitative model. Then I wrote a list of questions to ask the organizations that would allow me to complete my model.
Some examples of questions I asked REG:
What have you done in daily fantasy sports and finance since last year?
Given that you’ve already spent some time in the poker space and picked a lot of low-hanging fruit, how do you expect your fundraising ratio will change?
What are your future plans?
What will marginal donations be used for?
Do increased salaries for REG employees translate into more hours worked?
Some examples of questions I asked GFI:
Why are you trying to do so many things at once rather than start with a narrow focus?
What do you do to support startups?
Of the companies that you’ve helped form, what role did you play in forming them?
How much are you bottlenecked by hiring?
How would you use marginal funding?
Acknowledgments
Thanks to Linda Neavel Dickens, Eitan F., Jake McKinnon, Kelsey Piper, and Buck Shlegeris for providing feedback on my work.
Organizations
Global catastrophic risk organizations
I have not spent much time looking into organizations focused on reducing global catastrophic risks (GCRs) because I wanted to narrow my scope. But I would be remiss not to talk about GCRs at all.
I still believe, for the reasons I gave last year, that we should prioritize AI safety over other GCRs. It looks like one of the most probable and most devastating risks, and gets comparatively little attention. Among AI safety organizations, I’m still partial toward MIRI. As of this writing, MIRI has a long way to go to meet its fundraising target, so if I were to fund a GCR organization, I would probably fund MIRI.
Other organizations
I know of a handful of other organizations that might be highly effective, yet I don’t have a strong sense of whether their work is valuable. They look sufficiently unlikely to be the best charity that I didn’t think they were worth investigating further at this time. I have included a brief note about why I’m not further investigating each charity. I discussed several such organizations last year as well.
Sentience Politics
Sentience Politics describes itself as “an antispeciesist political think tank.” It opposes factory farming and also advocates for some less popular issues like wild animal suffering and the importance of the far future. I’m fairly optimistic about how much good Sentience Politics will do, but I did not seriously consider it for donation because I believed it would be dominated by either Mercy for Animals (MFA) or the Good Food Institute. MFA has stronger evidentiary support and a better track record, and GFI looks like it has a better chance of producing low-probability, high-value outcomes.
Animal Ethics
Animal Ethics focuses on wild animal suffering and other particularly important and neglected areas. At one point, no other organizations were doing work on wild animal suffering, although a few others now pay some attention to it.
Animal Ethics looks potentially promising, but I decided not to seriously investigate it because I did not believe I would be able to find evidence that would convince me to donate to it. It does not have much of a track record and I do not see clear evidence one could point to about why Animal Ethics is or is not effective.
New Harvest
New Harvest does research on cellular agriculture to develop clean meat and other products. New Harvest and Good Food Institute both work on supporting new food technologies that will reduce animal suffering. New Harvest could be a great place to donate, but I did not look into it much. Based on cursory examination, I believed the Good Food Institute’s model looked better, and I have more confidence that the people behind GFI know what they’re doing and will make choices that do the most good. I do not know much about cellular agriculture, so I do not believe I can effectively assess whether New Harvest has made good progress.
Finalists
Animal Charity Evaluators (ACE)
ACE creates value in three main ways.
It produces better top charity recommendations.
It persuades people to donate to its top charities.
It persuades organizations to focus on more effective interventions2.
At first glance, ACE appears primarily oriented around #1, but I expect that #1 has the smallest effect of these three—I tend to think that more top charities research won’t allow ACE to find substantially more effective top charities, especially considering the fairly limited scope of the research space. But ACE does do plenty of the other two activities, as well. For example, in ACE’s recent report on online ads, ACE claimed that it does not recommend ads as an intervention and prefers corporate outreach and undercover investigations. This sort of report probably has a reasonably good chance of persuading effective animal organizations to change their focus3.
If ACE persuades people to donate to its top charities who otherwise would have given to something much less effective, this matters a lot, and it matters in the same straightforward way that fundraising charities like REG matter. Perhaps it’s surprising initially that the biggest effect of a research organization may come from its ability to persuade people to donate, but this does not seem unreasonable upon further consideration. Probably, GiveWell has done much more good recently by persuading readers to donate than by producing better recommendations (assuming its recommended charities are as good as it claims, of which I am skeptical4).
ACE also could have a positive effect by persuading organizations to shift their operations toward more effective interventions. The obvious way to assess this part of ACE’s impact would be to ask animal charities whether they pay attention to ACE’s intervention reports or have shifted their priorities as a result of anything ACE has done; speaking to some charities in this space would be a natural first step.
ACE probably does have at least some positive effect via persuading people to donate more to better charities and via shifting organizations toward more effective interventions. But I don’t have a good sense of how much good ACE does, and I believe it would require a substantial time investment to find out.
It’s plausible that donations to ACE do more good than donations to ACE top charities, but I’m not confident enough about that to donate to ACE over them. That said, it’s a close call and I see a reasonable probability that I could change my mind.
Room for more funding
ACE looks fairly constrained by funding—if it had the budget to hire more people, ACE would do more top charities research, intervention outreach, talking to the press, and some other activities. I expect that more funding would allow ACE to scale up these activities. In some cases (such as for top charities research), marginal work won’t be as valuable as past work, but I expect some of ACE’s new work not to see much diminishing marginal utility. In general, I would say that ACE has considerable room for more funding and would have no qualms about funding it on this basis.
Good Food Institute (GFI)
The Good Food Institute attempts to support the development of new food technologies that will hasten the end of factory farming. Its primary activities include promoting research, supporting startups in the food space, engaging corporations, and campaigning to increase R&D in this field.
Cost-effectiveness estimate
Let’s consider three potential GFI routes to impact.
Accelerating the development of clean meat (a.k.a. cultured meat, that is, meat grown from cell cultures rather than taken off a dead animal’s body)
Supporting food startups that displace factory farms
Expanding and improving plant-based foods at restaurants and grocery stores through corporate engagement
My quantitative model provides my estimates for these, and the backend details the exact calculations used. For brevity, I will not explain all my calculations, but I will provide reasoning for a few inputs.
I don’t claim that my numbers have anything resembling a high level of accuracy. I’m working off the theory that pulling numbers out of your ass and building a model with them is better than pulling a decision out of your ass.
Years clean meat accelerated by GFI per year: To know how much GFI pushes forward the development of clean meat, we essentially want to know what share of clean meat development GFI is responsible for. Previously, GFI has worked to support two startups (that I know of) working on clean meat and probably will continue to play a non-trivial supporting role for clean meat companies. GFI also could encourage biotechnology researchers to focus on cultured animal products. GFI claims5 that many researchers would be interested in working on clean meat but simply don’t know the space exists.
Proportion of startup success attributable to GFI: GFI claims credit for the creation of several startups—GFI expressed belief that they would not have existed if GFI had not connected the founders and made them aware of the open spaces within food technology. I’m skeptical of this claim; it’s difficult to say with any certainty why a company launched and what would have happened otherwise. That said, GFI has relatively strong evidence given the limitations on the claims we can make about this sort of thing. It organized regular meetings between potential founders and introduced many of them through these meetings; it also researched what new areas look most promising, which probably helped the founders to identify their focus. If GFI has had past success in bringing together startups, then this gives good reason to believe that it will continue to do so.
Additionally, GFI plays a supporting role for startups by assisting with business plans, marketing, and other areas where some entrepreneurs tend to be weak. The people I spoke with at GFI claimed they care about what companies could do without them, and they want to disproportionately focus on helping companies do what they wouldn’t do well on their own (this primarily translates to helping companies get off the ground in their early stages). When I asked them about the value they provide to startups, they said they were interested in understanding their impact. They asked if I had any ideas about how to measure the effects of their work; I thought this was a good sign. GFI appears unusually effectiveness-minded; the founders started it because they believed its activities would be the most effective things to do. And unlike most effectiveness-minded organizations6, GFI was founded by people with a lot of experience. This adds some qualitative credibility to GFI.
These estimates suggest that GFI has (in expectation) a substantially bigger effect on reducing short- to medium-term suffering than any other organization I’ve studied.
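To show the structure of the calculation (not its actual inputs), here is a back-of-the-envelope version. The budget figure matches GFI’s $2.6 million fundraising goal for the year; every other number is a placeholder I made up for illustration:

```python
# Back-of-the-envelope structure of a clean-meat acceleration estimate.
# gfi_budget matches GFI's stated fundraising goal; all other numbers
# are hypothetical placeholders, not figures from my actual model.

gfi_budget = 2.6e6              # GFI's fundraising goal for the year (USD)
years_accelerated = 0.1         # years clean meat's arrival is moved up per year of GFI work
animals_farmed_per_year = 1e10  # rough global scale of land-animal farming
fraction_displaced = 0.3        # share of animal products clean meat eventually replaces
p_success = 0.2                 # probability clean meat works at all

# Expected animal-years of factory farming averted per dollar donated.
animal_years_per_dollar = (years_accelerated * animals_farmed_per_year
                           * fraction_displaced * p_success) / gfi_budget
print(round(animal_years_per_dollar, 1))  # ~23.1 under these made-up inputs
```

The point is not the output number, which is only as good as the placeholder inputs, but that the estimate is a product of a few highly uncertain factors, so its variance is large even when its expectation is high.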
GFI exposes some weirdness in my quantitative model: it looks better than traditional animal advocacy in the short run, but worse on far future effects. This is not a result of differing inputs on short-term versus far-future calculations, but happens because the way I use priors doesn’t quite correspond to reality. Despite this obvious flaw, I haven’t come up with any better way to use priors. Given that GFI looks better on alleviating short-term suffering than vegetarian advocacy, I should expect that it has better far-future effects as well.
GFI probably won’t affect people’s values in all the same ways that vegetarian advocacy would because it largely operates on the producer side rather than the consumer side. In this respect, it behaves similarly to corporate outreach. Like with corporate outreach, we can estimate its effects on values spreading by looking at how people become more empathetic toward animals when they stop eating them. I won’t get into that here, but there exists good reason to believe that there’s a reasonably strong effect. (Lewis Bollard of the Open Philanthropy Project believes that such an effect exists.)
Room for more funding
As of when I spoke with GFI (in mid-September), it had raised about $1.5 million in funding and had a goal of $2.6 million for the year.7 GFI is working on a lot of significant challenges in service of its mission “to create a healthy, humane, and sustainable food supply,” and I expect that it could probably scale up beyond its current hiring plan, although that might take more than a year: GFI, like any other organization, can only hire new people at a fairly limited pace if it wants to maintain high standards8. GFI plans to hire more directors within a year, which should make hiring and onboarding easier.
GFI looks comparatively more skilled at fundraising than my other finalist organizations. I consider this a counterpoint against funding GFI; it means marginal funding has a lower chance of substantially increasing GFI’s actual income. However, I do not feel particularly concerned about this—the more money GFI has now, the less effort it has to spend on fundraising and the more it can spend on its mission.
Learning value
GFI does something different from any existing animal organization, which means we have more potential to learn from GFI’s activities than we otherwise would. I see value in helping GFI continue and grow so that we can learn more about what it can accomplish.
But didn’t Open Phil say clean meat wasn’t going to work?
My cost-effectiveness estimate finds that GFI does a lot of good in expectation by accelerating the development of clean meat. But the Open Philanthropy Project’s writeup on clean meat claimed that it would not become cost-effective any time soon. I’m not particularly knowledgeable about the science here, but I believe Open Phil is mistaken.
First: Some people with strong backgrounds in cellular biology are working on developing clean meat. These people know more than I do and more than Open Phil does, and they would not spend time on this if they did not believe it would produce useful results. I believe this is the strongest possible argument: I will never know as much as these scientists do about clean meat, and neither will anyone at Open Phil9.
Second: Open Phil’s arguments have some flaws and gaps in reasoning. Its core claim is that clean meat costs too much to produce, particularly because the culture medium is too expensive. In a few cases, it briefly raises ways that prices could be driven down, but then essentially says, “We have not investigated this,” and implicitly assumes they won’t work. Its section on cost-effectiveness estimates looks at three back-of-the-envelope calculations, two of which are fairly dated and the third of which Open Phil and I agreed was not accurate. These provide only weak evidence about the potential cost-effectiveness of clean meat.
Third: I know some of the people involved in clean meat work, and they have shared some evidence with me about the viability of the field that they are not ready to make public. I know this does not help readers much, but I want to at least be transparent about the fact that I have reasons other than the ones above for believing that clean meat will be feasible in the near future.
Mercy for Animals (MFA)
Mercy for Animals represents the normal, respectable effective animal advocacy organization on my list of finalists. I was fairly indifferent between MFA and ACE’s other top charities; all three charities (MFA plus The Humane League and Animal Equality) work in similar spaces and have similar activities. I lean weakly toward MFA for the following reasons.
I’m most convinced that its leaders are highly competent and effectiveness-minded.
It appears best poised to scale up its activities, especially in the event that someone decides to give it lots of money.
ACE has already reviewed MFA and I don’t have much to add. Animal advocacy groups look highly valuable, and MFA looks like as good an animal advocacy group as any.
Raising for Effective Giving (REG)
Last year I wrote substantially about REG, and in the end decided to donate to it. I decided to donate on the basis of its ability to fundraise for other charities that I considered relatively effective; I expected that a donation to REG would produce more good than a donation to any individual direct charity via REG’s ability to raise money.
Since my donation last year, REG has continued to move money with similar efficiency. It has more recently been moving money toward charities that I tend to like better, so in fact it has a higher weighted money moved than it used to. I believe this happened in part due to randomness and in part because the REG employees have been trying to push donors toward what they see as more effective cause areas, and I largely agree with the REG employees about which causes matter most.
REG’s achievements and future plans
Now that REG is more established, it has plans to create routines that will allow it to continue to raise money from poker players, such as maintaining a presence at the World Series of Poker. I find it fairly likely that this will draw in substantial donations for effective charities because REG has experience raising money from poker players and probably can continue to do so. REG’s fundraising multiplier may increase or decrease: it could decrease if the low-hanging fruit in the poker space has already been picked10, or it could increase if REG becomes more efficient at raising money with less effort.
A year ago, REG had plans to expand into daily fantasy sports (DFS) and finance. Soon after REG started entering DFS, the field encountered some increased scrutiny, so REG didn’t do much in this area.
REG has plans in motion to get donations from people in finance, including a charity poker tournament. I expected REG to have accomplished more in finance by this point. REG wants to focus more on finance and make use of its relevant contacts. I don’t know how lucrative this will prove, and neither does REG. The REG team told me they focus on high-expected-value bets that might not pay off. I generally agree that this is a good idea, but it does pose a potential problem for REG. If REG pursues a high-variance strategy that doesn’t pay off, its lack of success could turn off donors, which makes it harder for REG to take risks. On the other hand, how can donors know exactly why a charity fails to perform as well as it used to, and is it unreasonable for them to withdraw funding? I don’t know how to resolve this.
In spite of REG’s lack of success at expanding into DFS and its relatively slow pace moving into finance, it did maintain a fundraising ratio of roughly $10 raised per dollar spent, and I expect it will continue to do so (with some variance).
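The leverage argument can be sketched as a simple product. The $10-per-$1 ratio comes from the paragraph above; the quality weighting and fungibility discount are hypothetical placeholders I chose for illustration:

```python
# Rough shape of the REG leverage calculation: a marginal donation's value
# is the money REG moves per dollar, weighted by how good the recipient
# charities are, discounted for fungibility with EAF. The $10-per-$1
# ratio is from this writeup; the other two inputs are made-up placeholders.

fundraising_ratio = 10.0     # dollars moved per dollar of REG budget
avg_recipient_quality = 0.5  # recipients' value relative to my top direct pick
fungibility_discount = 0.6   # fraction of a marginal dollar that truly adds to REG

value_per_dollar = fundraising_ratio * avg_recipient_quality * fungibility_discount
print(value_per_dollar)  # 3.0: beats a direct donation (1.0) under these inputs
```

Notice how sensitive the conclusion is to the fungibility discount: if donations to REG mostly displace EAF funding, the discount shrinks and the leverage advantage can disappear entirely.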
Room for more funding
REG plans on spending $120K in 2017, and expects to raise about $40-80K. Funding beyond $120K would free up time so that employees can spend less time ensuring REG’s financial security and more time raising money for effective charities.
Relatedly, REG has had some difficulty finding people with the right skill-set to reach out to poker players. So if you’re good at that sort of thing, you might consider working for REG.
I see REG as a riskier bet than some other charities like Mercy for Animals. But I’m comfortable with risk, and if I were to donate based just on what I’ve said so far, REG would probably get my money. That said, I have one major concern with donating to REG: fungibility.
REG operates under the umbrella of the Effective Altruism Foundation (EAF) and receives some funding from it. I’m concerned that if I donate more to REG, that means REG will receive less money from EAF; so donations to REG are effectively donations to EAF.
I believe EAF’s other activities have a lot of value in expectation, although I’m less confident about them than I am about REG. So this substantially weakens my estimate of the value of marginal dollars donated to REG. I still believe REG looks good, but this weakens the case for it.
Conclusions
The four finalist organizations I chose all look strong. In the course of learning about them, I repeatedly changed my mind about which I liked best, and for each organization I had some period where I thought it was the most likely donation target. I would not discourage anyone who wanted to donate to any of my four finalists: ACE, GFI, MFA, or REG.
Ultimately, I decided that the Good Food Institute looks strongest. Here’s a brief qualitative summary of my reasoning:
ACE and MFA look similarly good. It’s harder to tell how much good ACE does, but it potentially has higher leverage than MFA.
REG has done a great job of maintaining its fundraising ratio (better than I expected), but fungibility concerns count against it.
GFI appears to have a higher expected value than any of the charities REG fundraises for.
GFI looks riskier than MFA but has a much higher expected value, so I believe it’s worth the risk.
By transitivity, GFI looks better than ACE.
From a more quantitative perspective, my calculations suggest that GFI has the highest posterior expected value (notwithstanding the direct vs. far-future effects problem). It’s hard to describe my intuitions about why I favor GFI, but my quantitative model does a reasonable job of codifying these intuitions.
How to Donate
You can donate to the Good Food Institute directly through GFI’s website.
If you want to donate to one of my other finalists, ACE and MFA have donation pages as well. To donate to REG, if you live in the United States, you can donate through GiveWell to make your donation tax-deductible. For other countries you can donate through the website—make sure in the “Select Charity” section, you choose “Support REG” so your donation goes to REG instead of to a charity that REG supports.
Things that could change my mind
Donations to REG don’t actually substantially displace funding from EAF, and REG’s fundraising provides enough leverage to make it look better than GFI.
REG raises enough money for GFI that donating to REG looks better for GFI than donating directly to GFI.
My expected value calculations for GFI are overly optimistic or insufficiently high-variance.
I did not correctly interpret the results of my quantitative model.
There’s good reason to believe that GFI’s activities have a much weaker effect on spreading good values than things like what MFA does.
Notes
1. I do believe GFI looks comparatively worse in the long term. Reducing factory farming by persuading people that it matters should have greater long-term effects than reducing factory farming by making it convenient to eat alternative products. But this difference cannot explain why GFI looks better on direct effects and worse on far-future effects, because even if you assume GFI is just as good at shifting values as veg outreach, you still see the same inconsistency.
2. When reading a draft of this section, an ACE representative suggested that ACE has a fourth route to impact. ACE does work on building the effective animal activism community, which could help individuals work together better and adopt more effective practices.
3. The Open Philanthropy Project’s grants could have a similar effect. Open Phil’s grants already roughly equal the size of ACE’s money moved and will probably grow in the future, so they should provide reasonably strong incentives. I find Open Phil’s ideas about cause prioritization within the farmed animal advocacy space pretty mysterious, so I don’t know if Open Phil will have a particularly valuable persuasive effect. I’m more confident that it will now that its prioritization decisions are primarily being made by Lewis Bollard—a person who gives animal causes proper consideration and has experience in the space.
4. For one, I don’t believe one can justify the population ethics stance necessary for AMF to look as good as GiveWell says. More significantly, I’m not convinced that reducing global poverty is a good thing: it has lots of side effects, some of which are really good and some of which are really bad.
5. When I say that GFI claims something, I do not mean that this is the official stance of the company, but that a company representative made this claim in a personal communication.
6. I do not necessarily endorse the contents of the linked article.
7. The Open Philanthropy Project’s grant had already been made at this time. The quoted numbers represent my most up-to-date understanding at the time of this publication. I believe that Open Phil did not fill all of GFI’s room for more funding (RFMF) because (1) it has made many grants that look like they do not fill the grantee’s RFMF and (2) Open Phil and GiveWell have a history of overly conservative RFMF estimates. That’s only a brief explanation; I may elaborate if this becomes a sticking point for some people.
8. The GFI staff I spoke with elaborated that GFI is more constrained by onboarding than by hiring, because it has to spend considerable effort training new employees.
9. In 9th and 10th grade, I spent a lot of time arguing on the internet with climate change skeptics, which required me to learn a lot about climate science. In spite of spending dozens of hours researching the science, I still knew less than some climate skeptics who could argue circles around me, and at one point I became somewhat convinced that humans were not responsible for global warming. But even when I reached that point, I still could not deny that the overwhelming majority of climatologists believed climate change was real. Ultimately, I decided that I shouldn’t form my beliefs based on my knowledge of the science, because I would never know as much as climate scientists. Instead, I should defer to the experts. Open Phil’s current state of knowledge on tissue engineering resembles my knowledge of climate science: far more than the average person, and detailed enough to make you think you know what you’re talking about, but still nowhere close to what a professional knows.
10. I said the same thing last year, but REG’s fundraising multiplier did not change much between last year and this year.
I appreciate your taking the time to write up your decision process again, Michael. As you have said, by making the process more explicit it makes it easier for others to check and contribute to the process, and produces knowledge that others can use as a public good.
In this case I think the model you are using suffers from some serious flaws, both in the model structure and in the parameter values. I will discuss modifications to the parameters in your app, and readers may wish to open that alongside this comment to examine sensitivities.
To roadmap:
I think you are using a prior that makes false predictions and generates Charity Doomsday Arguments
The results are extremely sensitive to minor disturbances in variance that are amplified by your prior and modeling choices
In particular, the model frequently treats very good news as very bad news
I see multiple dubious parameter choices that switch the charity recommended by the model for long-run impact
Here’s one example of perverse behavior. Under the GFI model, you give a credence interval for the number of people who would switch to cultured meat if it were available, with a 10th percentile of 500 million and a 90th percentile of 2 billion; but if we raise the 90th percentile to 6 billion people (most of the world population), the posterior value of GFI falls (from 9375 to 1855 in the ‘direct effects’ section, and from 2.6e+33 to 9.1e+32 in long-run effects).
Evidence that leads us to update to the same 10th percentile and a higher 90th percentile is good news about GFI, but in the model it drops the value of GFI by 3-5x, enough to change the model’s recommendation away from GFI. And if meat substitutes eventually turn out to be cheaper, healthier, better for the environment, and to avert the cruelty of factory farming, why think 6 billion is too high for the 90th percentile? If we also allow for a 10th percentile of 100 million, the values fall further, to 138 (vs 9375 to start, an ~68x decline) and 1e+32 (a 25x decline).
Why does this happen? You have input a prior distribution over cost-effectiveness that assigns vanishingly low and rapidly decreasing probabilities to very large impacts. In particular, it penalizes impacts on the long-run future of civilization by 15-20 orders of magnitude. By analogy: suppose we know a lot about the rate at which dinosaur-killer asteroids impact the Earth (based on craters, meteor remnants, astronomical observations, etc.), and that we can set up an asteroid watch system that gives us sufficient advance warning to deflect such an asteroid for $100MM. The scientific evidence seems to indicate pretty clearly that we can track and stop asteroids, but your prior says with overwhelming confidence that we can’t do so, simply because preventing a dinosaur-killer hitting the Earth would have very large benefits. On the other hand, if we knew we were about to be wiped out by something else anyway, then we could believe the astronomers, since it wouldn’t be that good to stop asteroids. ↩
Likewise with GFI and vegan promotion. In your model, the greater the long-run flow-through effects of getting people to stop patronizing factory farms, the more strongly your prior predicts that it is essentially impossible to convert someone to veganism or develop cultured meat, since doing so would be too beneficial. This Charity Doomsday Argument lets us make all sorts of empirically wrong predictions about the world we live in.
Essentially, the prior in the model acts like an adversarial agent: whenever there is any uncertainty about anything that could affect cost-effectiveness, the model overwhelmingly favors the worst case. Good news about the upper bound of GFI’s reach makes your model consider it much worse, because the model treats this as evidence of increased variance and of the extreme worst case being much worse. Many of the conclusions generated by your app stem from artifacts of this effect.
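This shrinkage effect can be reproduced with a toy lognormal model. This is only a sketch: the prior parameters and percentile values below are illustrative, not taken from the actual app.

```python
from math import log, exp

Z90 = 1.2816  # z-score of the standard normal 90th percentile

def lognormal_from_quantiles(p10, p90):
    """Log-space mean and variance of a lognormal fit to a 10th/90th percentile pair."""
    mu = (log(p10) + log(p90)) / 2
    sigma = (log(p90) - log(p10)) / (2 * Z90)
    return mu, sigma ** 2

def posterior_mean(prior_mu, prior_var, est_p10, est_p90):
    """Posterior mean after a conjugate lognormal update against a lognormal prior."""
    mu, var = lognormal_from_quantiles(est_p10, est_p90)
    post_var = 1 / (1 / prior_var + 1 / var)
    post_mu = post_var * (prior_mu / prior_var + mu / var)
    return exp(post_mu + post_var / 2)

# Skeptical prior: median 1, log-space variance 1 (illustrative values).
narrow = posterior_mean(0.0, 1.0, 10, 100)   # estimate with 10th pct 10, 90th pct 100
wide = posterior_mean(0.0, 1.0, 10, 1000)    # same 10th pct, higher 90th pct
print(narrow > wide)  # True: raising the 90th percentile *lowers* the posterior
```

With these numbers the posterior mean falls from roughly 8.5 to roughly 4.4 when the 90th percentile is raised tenfold: the wider estimate has higher log-space variance, so the prior shrinks it harder than the extra upside helps.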
In addition, I continue to find many of the parameters problematic.
For example, in the model most of the value of GFI comes from advanced civilizations converting resources into well-being at peak efficiency (ecstatic dense AIs, etc); setting the probability of hedonium and dolorium to 0 lowers the posterior for GFI to 8e+20 from 2.6e+33. In the model GFI affects this by developing meat substitutes, and people who do not eat meat are then more likely to produce hedonium and avoid producing dolorium. But any society capable of interstellar colonization and producing hedonium would be perfectly capable of producing meat without animals, or powering themselves and enjoying sensory pleasures without either. Earlier availability would be relevant for the shape of later periods only insofar as the later availability doesn’t substitute. The model does not appear to adequately account for this (although it has some relevant parameters).
We could shoehorn this consideration into the ‘memetically relevant humans’ parameter under Veg Advocacy, which currently has a 10th percentile of 1 billion and 90th percentile of 2 billion. If we increase the 90th percentile there to 200 billion to reflect future generations having more of the control over the beings they create, that alone switches the recommendation of your model away from GFI.
[Disclosure: I work at the Future of Humanity Institute, and consult for the Open Philanthropy Project. I have previously worked at the Machine Intelligence Research Institute (and my wife is on its board), and volunteered and consulted for the Center for Effective Altruism, particularly 80,000 Hours. I am writing only for myself.]
Also, it would be helpful if you said more about how you think I should do things. Should I not use a Bayesian prior at all? Should I use a wider prior or a different distribution? Should I model interventions in a different way? How do you think I could do better?
Right now all I know is that any approach has lots of problems, and my current approach seems the least problematic. If you think something else would be better, please say what it is and why you prefer it.
Compared to this, I would use something that looks more like standard cost-effectiveness analysis. Rather than use the Doomsday Prior and variance approach to assess robustness (which, when ranking options, is at least as sensitive to errors about variance as cost-effectiveness analysis is to errors about EV), my recommendations would include the following:
Do multiple cost-effectiveness analyses using different approaches, methodologies, and people (overall and of local variables); seek robustness in the analysis by actually doing it differently
Use empirical and intuitive priors about more local variables (e.g. make use of data about the success rates of past scientific research and startups in forming your prior about the success of meat substitute research, without hugely penalizing the success rate because the topic has more utilitarian value by your lights) rather than global ones
Assess the size of the future separately from our ability to prevent short-run catastrophic risks or produce short-run value changes (like adding vegans)
Focus on relative impact of different actions, rather than absolute impact in QALYs (the size of the future mostly cancels out, and even a future as large as Earth’s past is huge relative to the present, hundreds of millions of years; actions like cash transfers also have long-run effects, although a lot less than actions optimized for long-run effects)
Vigorously seek out criticism, missing pieces, and improvements for the models and their components
Part of what I’m getting at is a desire to see you defend the wacky claims implicit in your model posteriors. You present arguments for how the initial estimates could make sense, but not for how the posteriors could make sense. And as I discuss above, it’s hard to make them make sense, and that counts against the outputs.
So I’d like to see some account of why your best picture of the world is in so much tension with your prior, and how we could have an understanding of the world that is consistent with your posterior.
Thanks, this is exactly the sort of thing I was looking for.
Slightly unrelated but:
The wacky claims you’ve talked about here relate to far-future posteriors. Do you also mean that the direct-effect posteriors imply wacky claims? I know you’ve said before that you think the way I set a prior is arbitrary; is there anything else?
I generally agree with this. My model has lots of problems and certainly shouldn’t be taken 100% seriously. Perhaps I should have said more about this earlier, but I wasn’t really thinking about it until you brought it up, so it’s probably good that you did.
It sounds like you’re raising 2-3 main issues:
1. Posteriors have weird behavior. (1a) Increasing the upper bound on a confidence interval reduces the posterior. (1b) Far-future effects are penalized in a way that has some counterintuitive implications.
2. The inputs for GFI’s far-future effects look wrong.
For (2), I don’t think they’re as wrong as you think they are, but nonetheless I don’t really rely on these and I wouldn’t suggest that people do. I could get more into this but it doesn’t seem that important to me.
For (1a), widening a confidence interval generally makes a posterior get worse. This sort of makes intuitive sense because if your cost-effectiveness estimate has a wider distribution, it’s more likely you’re making a mistake and your expected value estimate is too high. Maybe the current implementation of my model exaggerates this too much; I’m not really sure what the correct behavior here should be.
For (1b), this is a big problem and I’m not sure what to do about it. The obvious other thing to do is to essentially take claims about the far future literally—if calculations suggest that things you do now affect 10^55 QALYs in the far future, then that’s totally reasonable. (Obviously these aren’t the only things you can do, but you can move in the direction of “take calculations more seriously” or “take calculations less seriously”, and there are different ways to do that.) I don’t believe we should take those sorts of claims entirely seriously. People have a history of over-estimating cost-effectiveness of interventions, and we should expect this to be even more overstated for really big far future effects; so I believe there’s a good case for heavily discounting these estimates. I don’t know how to avoid both the “affecting anything is impossible” problem and the “take everything too seriously” problem, and I’d be happy to hear suggestions about how to fix it. My current strategy is to combine quantitative models with qualitative reasoning, but I’d like to have better quantitative models that I could rely on more heavily.
So, how should this model be used? I’m generally a fan of cost-effectiveness calculations. I calculated expected values for how MFA, GFI, and ACE directly reduce suffering, and my prior adjusts these based on robustness of evidence. The way I calculate posteriors has problems, but I believe the direct-effect posteriors better reflect reality than the raw estimates (I have no idea about the far-future posteriors). If you don’t like the way I use priors, you can just look at the raw estimates. I wouldn’t take these estimates too seriously since my calculations are pretty rough, but it at least gives some idea of how they compare in quantitative terms, and I believe it doesn’t make much sense to decide where to donate without doing something like this.
I think you would benefit a lot from separating out ‘can we make this change in the world, e.g. preventing an asteroid from hitting the Earth, answering this scientific question, convincing one person to be vegan’ from the size of the future. A big future (as big as the past, the fossil records shows billions of years of life) doesn’t reach backwards in time to warp all of these ordinary empirical questions about life today.
It doesn’t even have much of an efficient market effect, because essentially no actors are allocating resources in a way that depends on whether the future is 1,000,000x as important as the past 100 years or 10^50. Indeed almost no one is allocating resources as though the next 100 million years are 10x as important as the past 100 years.
The things that come closest are generic Doomsday Arguments, and the Charity Doomsday Prior can be seen as falling into this class. The best cashed-out version relies on the simulation argument:
Let the size of the universe be X
Our apparent position seems to let us affect a big future, with value that grows with X
The larger X is, the greater the expected number of simulations of people in seemingly pivotal positions like ours
So X appears on both sides of the equation, cancels out, and the value of future relative to past is based on the ratio of the size and density of value in simulations vs basement reality
You then get figures like 10^12 or 10^20, not 10^50 or infinity, for the value of local helping (multiplied by the number of simulations) vs the future
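The cancellation in the steps above can be sketched in symbols. This is my formalization of the argument, not Carl’s; the constants $k$ and $v$ are hypothetical.

```latex
% X = size of the universe
% vX = value of a pivotal action, conditional on being in basement reality
% N_sim(X) \approx kX = expected number of simulations of situations like ours
P(\text{basement}) \approx \frac{1}{1 + kX},
\qquad
\mathbb{E}[\text{value}] \approx vX \cdot \frac{1}{1 + kX}
\;\xrightarrow{\;X \to \infty\;}\; \frac{v}{k}.
```

Because $X$ appears in both the payoff and the simulation count, it cancels, and the bound $v/k$ depends only on the relative size and value density of simulations versus basement reality, not on the size of the universe.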
Your Charity Doomsday Prior might force you into a view something like that, and I think it bounds the differences in expected value between actions. Now this isn’t satisfying for you because you (unlike Holden in the thread where you got the idea for this prior) don’t want to assess charities by their relative effects, but by their absolute effects in QALY-like units. But it does mean that you don’t have to worry about 10^55x differences.
In any case, in your model the distortions get worse as you move further out from the prior, and much of that is being driven by the estimates of the size of the future (making the other problems worse). You could try reformulating with an empirically-informed prior over success at political campaigns, scientific discovery, and whatnot, and then separately assess the value of the different goals according to your total utilitarianish perspective. Then have a separate valuation of the future.
I’m not sure I fully understand what you are relying on. I think your model goes awry on estimating the relative long-run fruit of the things you consider, and in particular on the bottom-line conclusion/ranking. If I wanted to climb the disagreement hierarchy with you, and engage with and accept or reject your key point, how would you suggest I do it?
Hi Carl,
Are the adjusted lower numbers based on calculations such as these?
Re: Open Phil and clean meat—I don’t think the point about researchers is as strong as you imply. Do researchers claim that clean meat will become cost-competitive? It’s possible that their goal is to get fake meat to the point where it is cheap enough to be sold to vegetarians, which would still do a lot of good, but not nearly as much.
Basically, I agree with the point that they expect to get useful results and so we should too. But do we know that the useful result in question is “cost-competitive clean meat” and not something weaker? I legitimately don’t know what the answer is, having not researched this in detail. I skimmed the Open Phil report a while ago, and my impression was that researchers didn’t dispute the claim that it would be very difficult to get to the point where it is cost-competitive. (It’s decently likely though that I’m misremembering.)
I have spoken to researchers who expect clean meat to become cost-competitive. They have said this explicitly, and also implied it by putting themselves in positions where their financial welfare depends on clean meat making a profit.
My understanding is that the OpenPhil report didn’t say there weren’t people claiming it would be cost-competitive, but that those people didn’t offer a vision, plausibly accounting for all the costs, of how they could achieve it (or provide one in response to questions), or offer convincing responses to the analysis of skeptical scientists.
If these researchers can provide such an account it would be really helpful for them to publish it or tell it to OpenPhil (which is spending large amounts on factory farming, ramping up, and prioritized meat substitutes as an early scientific investigation).
Also, as I have been suggesting to animal advocates, including Lewis Bollard, this is a good topic for registering quantitative predictions (about companies, sales, investment, scientific milestones, costs, etc), trying to open a topic at Good Judgment Open and bets, as a strategically relevant area with value-aligned people on different sides of an empirical question.
[Speaking only for myself.]
From speaking to individuals involved in the field, I believe the people Open Phil spoke to did not offer enough information to form a full vision toward near-term cost reduction. This is likely the result of a combination of three things:
Some of the people who they spoke to did not have in-depth technical familiarity with the particular technological advancements required to bring the cost down.
Some of the people who they spoke to may have had access to such information but were unwilling to share it in-depth with the Open Phil researchers for obvious reasons (namely, they work at companies with proprietary information).
There are a number of individuals with such in-depth knowledge whom Open Phil did not speak to, whether because Open Phil did not reach out to them or for other reasons.
Researchers have in fact provided accounts that were satisfactory to either private donors with access to the information or VC funds backing private ventures in the space. These accounts are usually not public; however, GFI has begun to and will continue to explain publicly exactly how cost reduction can happen. For example, at a recent conference, two GFI scientists each gave a presentation explaining exactly how we are going to bring the cost of media down and detailing the other plausible technical advances necessary to bring costs down over the coming years. GFI will likely continue to publish such materials moving forward.
I have proposed betting above. If you believe clean meat has low odds of succeeding, perhaps you can make some money off of me.
OpenPhil has given a million dollars to GFI.
I would hope that in future this information will reach OpenPhil, as it is spending in the 8-figure range on factory farming (given a good account, it seems like an exceptionally high-return fundraising thing to do, for example), and look forward to seeing what happens with that.
I’m trying to create a meeting of the minds or such between OpenPhil and those who disagree with its perception of the cost feasibility. Ideally I would like to see agreements or quantified disagreement along those lines.
I’m raising the issue to see what is offered up, and because there is a clear inefficiency (at least one party is wrong and lots of money would change allocation either way), so addressing that looks disproportionately valuable (and cheap VOI generally comes ahead of implementation).
After I see the back-and-forth I may make a forecast myself.
[Writing only for myself.]
Carl, I’d be really interested in seeing any content originating from these discussions.
Michael, Which conference is this? Are there any videos available for these talks?
Video is not available, although I heard it might be at some point in the future.
Cool, can you give me the conference name? That way I can follow up with a Google search in a few weeks or months.
The title of the conference was Second International Conference on Cultured Meat.
Related article: https://www.clearlyveg.com/blog/2016/11/13/reflections-the-second-international-conference-cultured-meat
Thanks.
This year I spoke with three charities (ACE, GFI, and REG). I narrowed down to a list of finalists using only public information, and I didn’t feel the need to speak to my other finalist, MFA. The three I spoke with are unusually transparent, and I don’t believe a random sample of charities would have the same level of forthrightness that these did. I asked them all similar questions, so I don’t know how sensitive their responses are to the wording of the messages.
Thanks for writing this up! Have you taken into account the effects of reductions in animal agriculture on wildlife populations? I didn’t see terms for such effects in your cause prioritization app.
I’m certainly aware of that consideration. I didn’t include it because it seemed too speculative and not worth the effort (I thought it was unlikely that I’d change my mind about anything based on the result). I don’t have a monopoly on cost-effectiveness calculations though, you can write your own, or even fork my code if you know a little C++.
What do you mean by “too speculative”? You mean the effects of agriculture on wildlife populations are speculative? The net value of wild animal experience is unclear? Why not quantify this uncertainty and include it in the model? And is this consideration that much more speculative than the many estimates re: the far future on which your model depends?
Also, “I thought it was unlikely that I’d change my mind” is a strange reason for not accounting for this consideration in the model. Don’t we build models in the first place because we don’t trust such intuitions?
I don’t actually remember what I meant by “too speculative”, that was probably not a helpful thing to say.
There are thousands of semi-plausible things I could try to model. It would not be worth the effort to try to model all of them. I have to do some pre-prioritization about which things to model, so I pick the things that seem the most important. Specifically, I think about the expected value of modeling something (how likely am I to change my mind and how much does it matter if I do?) and how long it will take to see if it seems worth it. Sometimes I do this explicitly and sometimes implicitly. I don’t really know how likely I am to change my mind, but I can make a rough guess. It wouldn’t make sense to try to model thousands of things on the off chance that any of them could change my mind.
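That pre-prioritization heuristic can be written as a one-line back-of-envelope calculation. This is only a sketch of the reasoning described above; the function name and every number below are made up for illustration.

```python
def modeling_voi(p_change_mind, value_if_changed, hours_needed, value_per_hour):
    """Rough expected value of building a sub-model, minus the time it costs."""
    return p_change_mind * value_if_changed - hours_needed * value_per_hour

# Illustrative only: a 2% chance that the sub-model redirects $20,000 of donations
# to something substantially better, vs. 40 hours of work valued at $30/hour.
net = modeling_voi(0.02, 20_000, 40, 30)
print(round(net))  # -800: not worth modeling under these assumptions
```

The point of the sketch is just that when the probability of changing one’s mind is small and the modeling cost is real, many semi-plausible considerations come out net negative to model explicitly.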
If you would like to see how factory farming affects wild animal suffering, by all means create a cost-effectiveness estimate and share it.
Thanks, Michael. When you talked to GFI or researched them, did you find anything indicating they would be able to meaningfully spend $2.6 million? Or are you taking it on faith, given the positive interactions you had with them and their strong evidence-mindedness?
I ask because I’m similarly interested in donating to spur the development of animal product alternatives. To date, it’s seemed like GFI has had no issue raising money.
New Harvest has also done pretty well in raising money, and one thing I’ve liked about them is that they continually find new projects to fund as they get money. They have a great track record for that and have a very nice plan to help make it easier for others to produce cultured products—through creating cell lines and promoting open research.
GFI, being about a year old, doesn’t have the same track record, and given its ability to raise substantial sums anyway, I’ve been more inclined to donate to New Harvest, though I’m watching GFI closely.
I’m curious for your thoughts.
For visibility: Bruce Friedrich from GFI replied here.
I talked with folks at GFI about their plans; they have a budget for $2.6M that sounds reasonable to me. I have no doubt that they could spend that much, and they could probably spend a lot more than that, since they’re trying to do a lot of stuff. I’m not concerned about that. What I am concerned about is:
Does more funding help GFI today, even though it’s still working on scaling up?
Could GFI just raise the money anyway?
I think there’s a decent chance that GFI could raise the money anyway, like I said in my original post. But donating means they have to spend less effort on fundraising, and it helps in the scenarios where GFI struggles to raise money (which I expect won’t happen but it’s not that implausible).
Hi All,
Sorry for my delayed response; I’ve been traveling. To Jason’s questions:
Can we spend this money effectively & efficiently? I believe that we could spend significantly more than $2.6 million effectively and efficiently. We have seven departments at GFI, and five of them (innovation, science & tech, policy, international engagement, & corporate engagement) would profit immensely from more staff.
One of the first things I did with our current GFI staff was to work with them to create plans that include goals, metrics, and expansion plans. As of right now, we believe that we could spend about $4 million/year efficiently and effectively, mostly by expanding our innovation, scientific, and international programs. To be clear, I have no doubt that we could spend much more than that on our mission without any fluff; we just haven’t planned yet beyond $4 million/year.
GFI’s Success with Development: We are aware (and profoundly grateful) that our mission and programs have proven appealing to donors who care about ensuring that their donor dollars are spent based on EA principles; we were founded explicitly as an EA nonprofit, so EA principles are the basis for 100 percent of our decisions. But launching a new organization and attracting the necessary support in start-up mode is one thing. Maintaining this momentum, along with a robust revenue stream in the years to come, is a very different challenge and one we need to be aware of and plan for now.
We don’t take for granted that it will be straightforward to maintain the current levels of support, and we will be continuously assessing how best to maintain and grow our funding base. A few things that we’re thinking about in this regard are:
First, GFI is in our launch year, so securing all the resources to fund the first full 12 months of operations is key, so that we’re not distracted from or delayed in rolling out our four programs as planned.
Second, a part of that $2.6 million is the creation of an operating reserve, so that no one is at risk of losing her/his job; we see this as a fiduciary obligation for any nonprofit and perhaps especially for a new one like GFI. Fortunately, the Open Philanthropy Project agreed with this assessment and was happy to see a significant portion of their donation go into creating this reserve.
Third, an ancillary point: Gift support trails off significantly in Q1 and Q2 (and even into Q3). Therefore, it’s vital for GFI to secure a robust input of gifts and grants in Q4 of 2016 (and in future Q4s) to “smooth out” this seasonal dip in revenues and ensure that all programs can continue without abatement.
And of course, a reserve also safeguards against external factors that are outside GFI’s control and that might affect future revenues, such as dips in the economy that adversely affect giving, something I’ve experienced repeatedly in the past, most notably in 2001 after 9-11 and in 2008 during the economic downturn. It certainly seems possible that such a downturn could be coming, considering our current political situation.
New Harvest Comparison: I love New Harvest (I am a donor and have been a fan since Jason Matheny founded the group), but of course GFI’s mission and focus are different from New Harvest’s. As Jason notes, NH directly funds research in cellular agriculture and self-identifies as a research institute. We think that’s great, but it’s a different approach from ours. First, we focus on both plant-based and cellular alternatives to animal agriculture. Second, we have four program areas, three of which do not overlap with NH’s focus much or at all. That sort of research is a part of our program area four, though we’re focused on raising money for research rather than funding it directly. And we seek out the researchers who will be best-positioned to answer the most critical questions in plant-based and cellular agriculture. We just hired the project manager for this project (see www.gfi.org/our-team, Erin Rees Clayton), who has a Ph.D. from Duke and extensive experience in grant-writing. I’m happy to discuss our thinking on this (and anything else) with anyone who is interested, of course.
So in conclusion, we are convinced that our initial fundraising goal of $2.6 million is what’s required to ensure that our first year of operations is fully funded and that we’re meeting our fiduciary obligations to both programs and staff.
Should a steady stream of additional resources become available through increased philanthropic support, we believe that we will be able to spend that money effectively and efficiently. Our main issue will be hiring at a reasonable pace, so that everyone is fully trained and so that we maintain our commitment to exclusively exceptional staff.
GFI is strongly committed to transparency, and we are more than happy to share our Strategic Plan and expansion plan thinking with anyone who is interested in learning more.
I am willing to take bets on the following proposition:
I may accept alternative propositions as well.
Caveat: I have spoken to some of the people involved in clean meat work which may have given me access to information that is not available to the general public.
(Edit: Changed bet wording.)
Supermeat (cultured chicken producer) in Israel now has a test kitchen open to the public, but they have not started charging yet and it appears that they may need to scale up production somewhat before regularly serving cultured animal tissue.
I now think it’s more likely than not that you’ll win the bet, but it looks like it’ll be fairly close.
Nick Beckstead and I have agreed to bet $1000 at even odds on the proposition
Buck Shlegeris has agreed to bet his $2800 against my $2000 on the proposition
Update: I’ve agreed to be the arbiter of the bet with Buck and Michael. My current working definition of “regularly” is something like
There should probably also be a stipulation about the extent to which buying cultured meat is rate-limited.
If any EA Forum reader hears of something that might plausibly fulfill the bet resolution, let me know.
This bet resolves positive.
Woop, thanks for following up on this! I am always very happy when long-term bets like this get resolved.
I don’t know if this meets all the details, but it seems like it might get there: Singapore restaurant will be the first ever to serve lab-grown chicken (for $23)
-
If we develop cost-competitive clean meat within the next 5 years, it will probably take another 5-10 years before fast food chains start serving it (and it may take longer in the US, because the USDA may have to approve it first, which could take a long time). So I don’t think there’s a high probability that fast food chains will adopt clean meat by 2021, although this has little to do with my beliefs about when it will achieve cost-competitiveness. Even if it were cost-competitive as of right now, I still wouldn’t expect to see clean meat in fast food chains within 5 years.
Betting on fast food chains adds more dependencies to the bet: instead of just betting on when clean meat will be cost-competitive, we’re betting on how quickly it will achieve widespread acceptance and how quickly production will scale up to a national level. I would prefer to make a simpler bet that’s purely about cost-competitiveness.
In looking over your quantitative model on cage-free campaigns, I notice that you conclude cage-free campaigns have a mean direct effect of ~204 QALYs per $.
ACE also has a model of cage-free campaigns (actual inputs are in the bottom-right quadrant) and they conclude cage-free campaigns have a mean direct effect of 65 animals spared per dollar. This isn’t in QALYs, but since they’re all chickens and egg-laying chickens live an average of 1 year, I assume they mean ~65 QALYs per $.
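As a quick sanity check on how far apart the two numbers actually are, here is the comparison arithmetic. The figures come from the comment itself; the one-year hen lifespan is the assumption stated above.

```python
# Rough comparison of the two cage-free estimates (figures quoted in the comment).
michael_qaly_per_dollar = 204      # mean direct effect from Michael's model
ace_animals_per_dollar = 65        # ACE's mean estimate, animals spared per $
years_per_hen = 1.0                # assumed average lifespan of an egg-laying hen

ace_qaly_per_dollar = ace_animals_per_dollar * years_per_hen  # ~65 QALYs per $
ratio = michael_qaly_per_dollar / ace_qaly_per_dollar
print(round(ratio, 1))  # 3.1
```

So under the one-year-lifespan assumption the two models are about a factor of three apart, which is a meaningful gap but well within the ordinary noise of independent cost-effectiveness estimates.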
These two estimates differ by a factor of about three, and I’m unsure who is right. You could both be right, since yours is an estimate for MFA in particular, rather than corporate campaigns as a whole, and ACE seems to give less weight to MFA on these campaigns.
But the more important difference is that ACE’s estimate also includes a substantial risk of negative impact: ACE thinks there is a substantial probability that the effect on animal welfare could be net negative, perhaps stemming from the discussion between Direct Action Everywhere and OpenPhil.
I was curious what you thought about the differences in your two models, especially with regard to the risk of cage-free being negative?
I do realize this isn’t particularly important because you ultimately do not make a cage-free donation, but it seems like food for thought. It’s also interesting to evaluate model uncertainty by comparing two independent models.