# Modelling the Good Food Institute—Oxford Prioritisation Project

By Dominik Peters

Cross-posted from the Oxford Prioritisation Project blog.

Created 2017-04-18. Revised 2017-05-19. We’re centralising all discussion on the Effective Altruism forum. To discuss this post, please comment here.

We have attempted to build a quantitative model to estimate the impact of the Good Food Institute (GFI). We have found this exceptionally difficult due to the diversity of GFI’s activities and the particularly unclear counterfactuals. In this post, I explain some of the modelling approaches we tried, and why we are not satisfied with them. This post assumes good background knowledge about GFI; you can read more at Animal Charity Evaluators.

# Approach 0: Direct estimation

Our first model of GFI involved directly estimating by how many years a fully funded GFI would accelerate the arrival of chicken product substitutes. Our first intuition was to put this at 5 years, but we realised that we had next to no intuitive grasp on this figure at all. So we attempted to find approaches that involved estimating quantities we have a better intuitive grasp on.
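For concreteness, a direct estimate like this can be expressed as a simple Monte Carlo calculation rather than a point value. The sketch below is purely illustrative: only the ~5-year intuition comes from the text, and the spread around it and the number of chickens affected per year of acceleration are placeholder assumptions.

```python
import random

def direct_estimate(n_samples=100_000, seed=0):
    """Approach 0 as a Monte Carlo sketch: sample the number of years by
    which a fully funded GFI accelerates chicken substitutes, and convert
    to chicken-years averted.  All distributions are placeholders."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        # Wide spread around the ~5-year intuition: the median of this
        # lognormal is e^1.6, roughly 5 years (illustrative only).
        years = rng.lognormvariate(1.6, 0.8)
        # Chickens affected per year of acceleration -- a placeholder
        # assumption, not a figure from the post.
        chickens_per_year = rng.uniform(1e9, 10e9)
        total += years * chickens_per_year
    return total / n_samples

avg_chicken_years_averted = direct_estimate()
```

The point of writing it this way is that the wide lognormal makes explicit how little grip we had on the central figure.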

# Approach 1: Multiplier on investment into animal substitutes

A donation of \$1 to GFI increases the amount of investment (by VCs, government research councils, and other grant-making institutions) by \$X, through creating new investment opportunities (like start-ups) and by making the field more attractive in general.

For example, New Harvest was instrumental in starting companies working on yeast-based replacements for dairy and egg products, by introducing future founders to each other and giving them a small start-up grant (of about \$30-50k each). These companies subsequently secured additional investment of about \$3m. However, we do not believe that this is very informative for estimating future multipliers, since New Harvest might have picked very low-hanging fruit in these instances.

Next, we would need some account of how this increased investment would have accelerated the development of meat alternatives (so as to produce value). For this, we would need to estimate when this investment of \$X would otherwise have occurred, but it is unclear how to figure this out.
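The arithmetic of this approach can at least be written down, even though its inputs resisted estimation. In this hedged sketch, the multiplier, the number of years the investment is brought forward, and the value per dollar-year are all placeholder assumptions, not estimates from the post:

```python
def multiplier_model(donation, multiplier, years_brought_forward,
                     value_per_dollar_year):
    """Approach 1 sketch: a donation crowds in `multiplier` times its size
    in outside investment, which would otherwise have arrived
    `years_brought_forward` years later.  Every input is a placeholder."""
    investment_moved = donation * multiplier
    return investment_moved * years_brought_forward * value_per_dollar_year

# Illustrative numbers only (the New Harvest case of ~$30-50k grants
# preceding ~$3m of follow-on investment would suggest a much higher
# multiplier, but the post argues that was low-hanging fruit):
impact = multiplier_model(donation=10_000, multiplier=10,
                          years_brought_forward=2, value_per_dollar_year=0.5)
```

Each factor isolates one of the quantities we failed to pin down: the multiplier, the counterfactual timing, and the value of earlier availability.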

We did not come up with good strategies for breaking these difficulties down into smaller chunks that would be easier to model.

# Approach 2: Direct Investment

A very simple modelling strategy involves estimating the rough amount of research effort (capital investments and research hours) that will be required in total to eventually reach a “solution”, i.e., availability of attractive substitutes for animal products. One could obtain such an estimate by enumerating the list of animal products that need to be replaced, and then looking at how much effort was needed to develop products such as the Impossible Burger. Next, one could assume that no-one else would ever invest in these opportunities. Then, by estimating the value of having substitutes and multiplying by the fraction of the total effort required that our donation financed, we would get an estimate of the impact of our donation.
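As a sketch (with every input a placeholder), the calculation this paragraph describes is just a pro-rata share:

```python
def direct_investment_model(donation, total_effort_required, value_of_solution):
    """Approach 2 sketch: credit the donation with the fraction of the total
    required effort it finances, under the (implausible) assumption that
    no one else would ever invest.  All inputs are placeholders."""
    fraction_financed = donation / total_effort_required
    return value_of_solution * fraction_financed

# E.g. a £10,000 donation against a guessed £100m total effort and a
# guessed value of the solution (units left abstract):
impact = direct_investment_model(10_000, 100e6, 1e9)
```

As the next paragraph notes, this linear pro-rata accounting ignores acceleration entirely.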

However, this approach is a bit silly because it does not model the acceleration of research: if there are no other donors in the field, then our donation is futile, because £10,000 will not fund the entire effort required.

# Approach 3: Acceleration Dynamics

How are we going to reach the stage at which attractive meat substitutes are widely available? Well, companies and other research groups will have to expend some amount of effort on the problem, and the more cumulative effort has been expended, the closer we are to a good solution. Our donation to GFI could be modelled as an external “shock” to the amount of effort invested into the field from that point in time onwards. Graphically, this could look like this:

Whether the unperturbed curve is linear is unclear; it could be convex.

Now, with additional effort invested into the problem, we are getting closer to a solution, and in particular the quality of the meat substitutes available increases. Again, it is not obvious how the quality of these products is functionally related to the amount of cumulative effort expended. One possible shape would be an S-curve (which increases rapidly after some initial breakthroughs have been achieved, and flattens out when perfecting things); it could be a curve indicating diminishing returns throughout (if we think that increasing quality becomes harder and harder); it could take many other possible shapes (consisting of many separate discoveries); or it could be exponential (as in Moore’s law). Different choices of shape imply different magnitudes of impact, and we found no good way of figuring out which shape fits this particular situation.
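To illustrate how much the choice of shape matters, here is a toy comparison of two of the candidate curves. The functional forms and parameters are arbitrary; the point is only that the same “shock” of effort buys very different quality gains depending on the assumed shape and on where we currently sit on the curve.

```python
import math

def quality(effort, shape):
    """Quality of meat substitutes as a function of cumulative effort.
    Two of the candidate shapes from the post; parameters are arbitrary."""
    if shape == "s_curve":
        return 1 / (1 + math.exp(-(effort - 5)))  # logistic S-curve
    if shape == "diminishing":
        return 1 - math.exp(-0.3 * effort)        # diminishing returns throughout
    raise ValueError(shape)

def impact_of_shock(baseline_effort, shock, shape):
    """Quality gain from an external shock of extra effort (the donation)."""
    return quality(baseline_effort + shock, shape) - quality(baseline_effort, shape)

# The same shock near the S-curve's inflection point vs on its flat tail:
steep = impact_of_shock(5.0, 0.5, "s_curve")
flat = impact_of_shock(10.0, 0.5, "s_curve")
```

Under the S-curve, impact depends critically on how far along the field already is; under diminishing returns, the earliest effort always matters most.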

# Conclusion

We quickly became dissatisfied with each of the modelling approaches we tried. They either had major flaws (like failing to model acceleration dynamics) or did not succeed in actually breaking down our uncertainty into smaller, more manageable components.

• We have found this exceptionally difficult due to the diversity of GFI’s activities and the particularly unclear counterfactuals.

Perhaps I am not understanding, but isn’t it possible to simplify your model by homing in on one particular thing GFI is doing and pretending that a donation goes towards only that? Oxfam’s impact is notoriously difficult to model (too big, too many counterfactuals), but as soon as you only look at their disaster management programs (where they’ve done RCTs to showcase effectiveness), then suddenly we have far better cost-effectiveness assurance. This approach wouldn’t grant a cost-effectiveness figure for all of GFI, but for one of their initiatives at least. Doing this should also drastically simplify your counterfactuals.

I’ve read the full report on GFI by ACE. Both it and this post suggest to me that a broad capture-everything approach is being undertaken by both ACE and OPP. I don’t understand. Why do I not see a systematic list of all of GFI’s projects and activities, both on ACE’s website and here, and then an incremental systematic review of each one in isolation? I realize I likely sound like an obnoxious physicist encountering a new subject, so do note that I am just confused. This is far from my area of expertise.

However, this approach is a bit silly because it does not model the acceleration of research: if there are no other donors in the field, then our donation is futile, because £10,000 will not fund the entire effort required.

Could you explain this more clearly to me, please? With some stats as an example it’ll likely be much clearer. Looking at the development of the Impossible Burger seems a fair phenomenon to base GFI’s model on, at least for now, and at least insofar as it is being used to model a GFI donation’s counterfactual impact in supporting similar products GFI is trying to push to market. I don’t understand why the approach is silly because \$10,000 wouldn’t support the entire effort, or how this is tied to the acceleration of research.

Regarding acceleration dynamics, then, isn’t it best to just model based on the most pessimistic, conservative curve? It makes sense to me to think this would be the diminishing-returns one. This also falls in line with what I know about clean meat. If we eventually do need to simulate all elements of meat (we might as well assume we do, for the sake of being conservative), we’ll also have to go beyond merely the scaffolding and growth-medium problems and include an artificial blood circulation system for the meat being grown. No such system yet exists, and it seems reasonable to suspect that the more precisely we want to simulate meat, the faster our scientific problems multiply. So a diminishing-returns curve is expected for GFI’s impact, at least insofar as its work on clean meat is concerned.

• However, this approach is a bit silly because it does not model the acceleration of research: if there are no other donors in the field, then our donation is futile, because £10,000 will not fund the entire effort required.

Could you explain this more clearly to me, please? With some stats as an example it’ll likely be much clearer. Looking at the development of the Impossible Burger seems a fair phenomenon to base GFI’s model on, at least for now, and at least insofar as it is being used to model a GFI donation’s counterfactual impact in supporting similar products GFI is trying to push to market. I don’t understand why the approach is silly because \$10,000 wouldn’t support the entire effort, or how this is tied to the acceleration of research.

There are two ways donations to GFI could be beneficial: speeding up a paradigm change that would have happened anyway, and increasing the odds that the change happens at all. I think it’s not unreasonable to focus on the former, since there aren’t fundamental barriers to developing vat meat and there are some long-term drivers for it (energy/land efficiency, demand).

However, in that case, it helps to have some kind of model for the dynamics of the process. Say you think it’ll take \$100 million and 10 years to develop affordable vat burgers; \$1 million now probably represents more than 0.1 years of speedup, since investors will pile on as the technology gets closer to being viable. But how much does it represent? (And, also, how much is that worth?) Plus, in practice we might want to decide between different methods and target meats, but then we need a decent sense of the responses for each of those.
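This arithmetic can be made concrete under one convenient assumption: annual investment grows exponentially toward the fixed total as investors pile on, so an early donation substitutes for spending at today’s low rate and buys more calendar time than its pro-rata share. The growth rate and functional form below are illustrative assumptions, not estimates:

```python
import math

def speedup_from_early_donation(donation, total_cost, horizon_years, growth):
    """If annual investment is r0 * exp(growth * t) and cumulative spending
    over the horizon equals total_cost, a small donation today replaces
    donation / r0 years of current spending -- a rough proxy for speedup."""
    # Solve: integral of r0 * exp(growth * t) dt from 0 to horizon = total_cost
    r0 = total_cost * growth / math.expm1(growth * horizon_years)
    return donation / r0

pro_rata = 1e6 / 100e6 * 10                            # naive 0.1 years
early = speedup_from_early_donation(1e6, 100e6, 10, growth=0.3)
```

With these made-up numbers, `early` comes out around 0.6 years, consistent with the claim that \$1 million now is worth more than the pro-rata 0.1 years; the answer is, of course, highly sensitive to the assumed growth rate.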

I agree that this is possible. I’d say the way to go is generating a few possible development paths (paired \$/time and progress/\$ curves) based on historical tech development and domain experts’ prognostications, and then looking at marginal effects for each path.

Not having looked into this more, it seems doable but not straightforward. Note that the Impossible Burger isn’t a great model for full-on synthetic meat. Their burgers are mostly plant-based, and they use yeast to synthesize hemoglobin, a single protein—something that’s very much within the purview of existing biotech. This contrasts with New Harvest’s and Memphis Meats’ efforts synthesizing muscle fibers to make ground beef, to say nothing of the eventual goal of synthesizing large-scale muscle structure to replicate steak, etc.

And we have a lot less to go on there. Mark Post at Maastricht University made a \$325,000 burger in 2013. Memphis Meats claimed to be making meat at \$40,000/kg in 2016.* Mark Post also claims scaling up his current methods could get to ~\$80/kg (~\$10/burger) in a few years. That’s still about an order of magnitude off from the mainstream, and I think you’d need someone unbiased with domain expertise to give you a better sense of how much tougher that would be.

*Note: according to Sentience Politics’ report on vat meat. I haven’t listened to the interview yet.

• My thoughts; apologies if I am just reiterating what you already know.

It seems like there are 3 very difficult things to get a ballpark estimate of:

1. The likelihood of developing a successful fake chicken as a function of the number of investment dollars. This seems like a scientific/technical question. The impact-driven EA investor will want to know the impact of his/her \$1 on this probability, i.e., the slope of this curve at the likely level of others’ investment. That is, if I expect others will invest \$1 million, I consider how the probability of a chicken differs when investment increases from \$1 million to \$1 million + 1. (Or to \$1 million + 1 × leverage multiplier; see below.)

2. The multiplier effect of a (donated) investment dollar, including both:

   a. The leverage of a dollar (how much more you could borrow at a reasonable interest rate with an additional dollar of equity collateral); I think this could be estimated under some reasonable assumptions.

   b. The effect of an additional investment dollar on subsequent investors’/altruists’ willingness to invest. This seems like the hardest thing to calculate, and it is not completely clear whether that effect should even be positive. This is the ‘seed money’ question. It might be that when altruists see a greater amount of investment already, they see their own contribution as less vital, and invest less.

3. The likely overall distribution of the total amount invested; the \$1 million in the example in part 1.
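These three quantities can be wired together in a sketch. The saturating success-probability curve and every parameter below are placeholders, not estimates; the point is only to show where each of the three hard-to-estimate inputs enters:

```python
def success_probability(total_investment, scale=5e6):
    """Point 1 (placeholder): probability of a successful fake chicken as a
    saturating function of total investment dollars."""
    return total_investment / (total_investment + scale)

def marginal_impact(others_investment, my_dollars, leverage=1.0):
    """Point 1's slope at point 3's expected level of others' investment,
    with my dollars scaled by point 2's leverage multiplier."""
    base = success_probability(others_investment)
    with_me = success_probability(others_investment + my_dollars * leverage)
    return with_me - base

# How much one leveraged dollar moves the probability, if others invest $1m:
delta = marginal_impact(others_investment=1e6, my_dollars=1.0, leverage=2.0)
```

With this (made-up) saturating curve, the marginal dollar matters less the more others are expected to invest, which is one way the ‘seed money’ worry in point 2b could play out.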