Thanks for both of your responses (@Jacob_Peacock and @abrahamrowe). I was going to analyse the podcast in more detail to resolve our different understandings, but I think @BruceF’s response to the piece clarifies his views on the “negative/positive” PTC hypothesis. The views that he would defend are: (negative) “First, if we don’t compete on price and taste, the products will stay niche, and meat consumption will continue to grow.” and (positive) “Second, if we can create products that compete on price and taste, sales will go up quite a lot, even if other factors will need to be met to gain additional market share.”
I expect that these two claims are less controversial, albeit with “quite a lot” leaving some ambiguity.
My initial response was based on my assumption that everyone involved in alt protein realises that PTC-parity is only one step towards widespread adoption. But I agree that it’s worth getting more specific and checking how people feel about Abraham’s “how much of the work is PTC doing- 90% vs 5%?” question.
I assume that if you surveyed/ interviewed people working in the space, there would be a fairly wide range of views. I doubt that people have super-clear models, because we’re expecting progress in the coming years to come on multiple fronts (consumer acceptance, product quality, product suitability, policy, norms) that mutually reinforce each other, but it would be worth clarifying so that you can better identify what you’re arguing against.
From my own work on alt-protein adoption in Asia, I sense that PTC-parity is only a small part of the puzzle, but it would also be far easier to solve the other pieces if we suddenly had some PTC-competitive killer products, so PTC interacts with other variables in ways that make it difficult to calculate.
Overall, I stand by my criticism that I don’t think the positive PTC-hypothesis as you frame it is commonly held. But I’d like to understand better what the views are that you’re critiquing. It would be interesting to see your anecdotal evidence supported- what people actually think when they say they (previously) bought into PTC, and who these people are. It could be true, for example, that people who work in PBM startups tend to believe more strongly that a PTC-competitive product will transform the market, but people working on the market side tend to realise how many barriers there are to adoption beyond these factors.
Stakeholder-engaged research in emerging markets: How do we unlock growth in Asian animal advocacy?
Thanks for this article, I agree with a lot of the takeaways, and I think that more research into developing an evidence-based theory of change for short- and long-term uptake of alt proteins is very valuable.
But I think the problem with arguing against an informal hypothesis is that I don’t think you’re actually arguing against a commonly-held view.
This is how you frame it:
“The price, taste, and convenience (PTC) hypothesis posits that if plant-based meat is competitive with animal-based meat on these three criteria, the large majority of current consumers would replace animal-based meat with plant-based meat.”
I’ll call it the “positive-PTC hypothesis”, the idea that if we achieve PTC-parity, the market will automatically shift. I don’t think anyone in the space holds this view strongly. To the extent that they do stress PTC over other factors, the sources you quote seem to put more emphasis on the ‘negative-PTC’ hypothesis- achieving PTC-parity is a necessary but not sufficient condition for people to start considering PBM.
Szejda et al. say:
“… only after a food product is perceived as delicious, affordable, and accessible will the average consumer consider its health benefits, environmental impact, or impact on animals in the decision to purchase it.”

This negative-PTC hypothesis also seems to be implied to some extent in the Friedrich 80k podcast you refer to. He also says explicitly that he doesn’t think everyone would switch to PTC-matched PBM (hence the need for cell-cultured meat).
There’s a bit of positive-PTC in the GFI research program RFP (2019) claim that “alternative proteins become the default choice” (both cultured and PBM), but even then it’s not exclusively PTC: they also refer to these proteins winning out on perceptions of health and sustainability, and requiring product diversity.
As well as this, every source you quote, and every paper I’ve ever read on PB meat acceptance, also stresses a bunch of other factors besides PTC. In particular, the main report you associate with PTC (Szejda et al. 2020) stresses familiarity throughout the report. “While many people have favorable attitudes toward sustainability and animals, the core-driver barriers to acting on these attitudes are too strong for most. More than anything, products that meet taste, price, convenience, and familiarity expectations will reduce these barriers”. Familiarity in itself could go a long way to explaining the negative results in all the studies you refer to: all are comparing an unfamiliar product with a familiar product.
So I’d argue that very few people in this space actually support the PTC hypothesis as you frame it. Few people think that PTC-parity is sufficient for widespread PBM uptake.
Having said that, I think there probably is an interesting, genuine divergence of views between people who hold a PTC+ hypothesis and those who hold a more “holistic” view. So if a diverse range of alt proteins achieve parity in price, taste and convenience, while also being positively perceived in terms of familiarity, health, environment, status, safety etc., some might believe that there will be an inevitable shift to these products, while others would think that meat and carnism are so embedded within our cultural and social norms that even if we get overwhelmingly good alternatives, the majority of the population would still be very unlikely to stop eating meat. It’s an interesting question, but one that I don’t think you’ve answered in this piece.
If I recall, it was only really in the 2010s, following the release of this study (catchily named HPTN 052), that we realised that ART/ ARV was so effective in stopping HIV transmission, so I think that was a justifiable oversight.
Assuming that prices will remain constant seems to be a genuine issue—I think we need to think about this more when we look at cost-effectiveness generally—but I have an inkling as to why this might be common.
In Mead Over’s (Justin’s colleague) excellent course on HIV and Universal Health Coverage, we modelled the cost effectiveness of ART compared to different interventions. The software package involved constant costs for ART (and second line ART) as a default setting, and didn’t assume that there would be price reductions. I didn’t ask why this was, but after adding price reductions to the model for my chosen country (Chad), I realised that the model then incentivises delaying universal ART within a country, and instead focusing on other interventions which are less likely to decrease in cost over time.
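To make that incentive concrete, here’s a minimal sketch comparing the discounted cost of starting ART now vs. in five years. All the numbers (cost per patient-year, decline rate, horizon, discount rate) are my own assumptions, not the course model’s, and it only counts costs- the health benefits forgone by delaying are left out entirely:

```python
# Minimal sketch of the delay incentive (all numbers hypothetical).
# Compares the discounted cost per patient of starting ART now vs. in
# five years, under constant prices and under prices falling 8%/year.
# Cost side only: the health benefits lost by delaying are ignored.

DISCOUNT = 0.03        # annual discount rate
HORIZON = 20           # years modelled
ANNUAL_COST = 300.0    # assumed ART cost per patient-year (USD)

def discounted_cost(start_year: int, decline: float) -> float:
    """Total discounted ART cost if treatment starts in `start_year`."""
    return sum(
        ANNUAL_COST * (1 - decline) ** t / (1 + DISCOUNT) ** t
        for t in range(start_year, HORIZON)
    )

for decline in (0.0, 0.08):
    now, later = discounted_cost(0, decline), discounted_cost(5, decline)
    print(f"price decline {decline:.0%}: start now ${now:,.0f}, "
          f"start in 5y ${later:,.0f} ({1 - later / now:.0%} cheaper)")
```

On these made-up numbers, a five-year delay looks ~31% cheaper under constant prices (discounting plus fewer treatment-years) but ~48% cheaper with an 8% annual price decline- so adding price declines mechanically strengthens the cost-side case for delay, which is the effect I saw in the Chad model.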
Delaying might be wise in some contexts, but I’m sure many health ministers are just looking for excuses to delay action (letting other countries bring the price down first), so politics doubtless plays a role.
Good point. I believe all the dollar figures I cited aren’t inflation-adjusted, which is probably the main difference between sales by weight and sales by dollar.
I haven’t got a very well calibrated model, but I’m still fairly optimistic about alt proteins becoming increasingly commercially viable. I would update very little on 2022 being a fairly bad year for a few reasons:
I don’t think the article’s claim that ‘sales of PBM declined significantly in 2022’ is actually true. According to the latest GFI plant-based meat report: “...global dollar sales of plant-based meat grew eight percent in 2022 to $6.1 billion, while sales by weight grew five percent.” In Europe, sales grew by 6%, so it’s really just the US.
In the US, “estimated total plant-based meat dollar sales increased slightly by 2% while estimated pound (lb) sales decreased by 4%” (GFI, 2023), so a very marginal decline in demand. I think the narrative that the market collapsed in 2022 is more due to the more significant decline in home/ refrigerated purchasing of PBM, and perhaps because Beyond Meat had an awful year.
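As a rough first-order check on those US figures (treating dollar sales as average price × volume, so percentage changes approximately add):

$$\frac{\Delta D}{D} \approx \frac{\Delta P}{P} + \frac{\Delta Q}{Q} \;\Rightarrow\; +2\% \approx \frac{\Delta P}{P} + (-4\%) \;\Rightarrow\; \frac{\Delta P}{P} \approx +6\%$$

i.e. the average price per pound rose roughly 6%, so the nominal growth came from higher prices rather than more product sold- which fits the marginal-decline-in-demand reading.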
Other metrics are looking okay:
- Total invested capital in alt protein is still comparable with other food sectors, with major increases in cultivated and fermented meat investment.
- Government investment is growing and locked in as part of multi-year programs in many countries.
- Many new production facilities are springing up, which should reduce prices, especially for fermented and cultivated products.
- I think we’ve reached the turning point for national safety approvals for cultivated meat, and I expect more (such as China and Japan) in 2023-2024
- There has been some pretty considerable investment in certain emerging markets (APAC and MENA).

Even if it has been a bad year in some respects, it’s been a bad year for everyone. Conventional meat sectors have also suffered in 2022- particularly in the UK and EU. And growing tech sectors have also slowed—quantum computing investment also flattened off from a rapid growth trajectory.

So I wouldn’t update too much on 2022’s slowdown- it’s a combination of macroeconomic factors, Russia-Ukraine, high interest rates and rising energy prices that have reduced investment, and inflation that’s led to reduced consumption.
As for your question of whether alternative proteins will take off: I’m optimistic. I think we’ve entered a situation where:
- Many conventional/ institutional food producers have bought into alt proteins, so there’s less institutional resistance.
- Most governments are at least partly supporting the transition to alt proteins, especially import-dependent countries with food security issues like Israel, Singapore and the UAE.
- Young people across a lot of the world are increasingly into alt proteins (especially milks), so the demographic shift will work in favour of alt proteins over the next decade.
I feel that we have the right incentives and technology to produce super tasty, affordable alternatives for various consumer types, and when we get these products, the market should start to grow.
The reasons I could be wrong might be:
- We’ll continue to lack those killer products at an affordable price. The market will be dominated by poor, low-cost products, giving alt protein a worse reputation among many normal consumer groups.
- It’s only ever going to be a niche market. Almost everyone actually prefers conventional meat, and most consumers are more resistant to change than we currently think. Cell-cultivated meat will be seen as the only viable alternative, and people will continue consuming conventional meat until cell-cultivated meat hits price parity.
Developing Farmed Animal Welfare in China—Engaging Stakeholders in Research for Improved Effectiveness
I’d argue that EA is quite bad at something like: “Engaging a broad group of relevant stakeholders for increased impact”. So getting loads of non-EA people on your side, and finding ways to work together with multiple, potentially misaligned orgs, governments and individuals.
Don’t want to overstate this- some EA orgs do this well. Charity Entrepreneurship include stakeholder engagement in their program, for example. But it seems neglected in the EA space more generally.
Diagnosing EA Research- Are stakeholder-engaged methods the solution?
Video: A Few Exciting Giving Opportunities for Animal Welfare in Asia
Yeah, makes sense. I just don’t know why it’s not just: “It’s conceivable, therefore, that EA community building has net negative impact.”
If you think that EA is/ EAs are net-negative value, then surely the more important point is that we should disband EA totally / collectively rid ourselves of the foolish notion that we should ever try to optimise anything/ commit seppuku for the greater good, rather than ease up on the community building.
I think I’m not following the first stage of your argument. Why would the FTX fiasco imply that community building specifically (rather than EA generally) might be net-negative?
I definitely don’t think it’s too much to expect from a self-reflection exercise, and I’m sure they’ve considered these issues.
For no. 1, I wouldn’t actually credit growth so much. Most of the rapid increases in life expectancy in poor countries over the last century have come from factors not directly related to economic growth (edit: growth in the countries themselves), including state capacity, access to new technology (vaccines), and support from international orgs/ NGOs. China pre- and post-1978 seems like one clear example here- the most significant health improvements came before economic growth. Can you identify the ‘growth miracles’ vs. countries that barely grew over the last 20 years in the below graph?

I’d also say that reliably improving growth (or state capacity) is considerably more difficult than reliably providing a limited slice of healthcare. Even if GiveWell had a more reliable theory of change for charitably-funded growth interventions, they probably aren’t going to attract donations- donating to lobbying African governments to remove tariffs doesn’t sound like an easy sell, even for an EA-aligned donor.
For 2, I think you’re making two points- supporting dictators and crowding out domestic spending.

On the dictator front, there is a trade-off, but there are a few factors:
I’m very confident that countries with very weak state capacity (Eritrea?) would not be providing noticeably better health care if there were fewer NGOs.
NGOs probably provide some minor legitimacy to dictators, but I doubt any of these regimes would be threatened by their departure, even if all NGOs simultaneously left (which isn’t going to happen). So the marginal negative impact of increased legitimacy from a single NGO must be very small.
On the ‘crowding out’ front, I don’t have a good sense of the data, but I’d suspect that the issue might be worse in non-dictatorships- countries/ regions that are easier/ more desirable for western NGOs to set up shop, but where local authorities might provide semi-decent care in the absence of NGOs. This article illustrates some of the problems in rural Kenya and Uganda (where I think there’s a particularly high NGO-to-local people ratio).
I suspect GiveWell’s response to this is that the GiveWell-supported charities target a very specific health problem- they may sometimes try to work with local healthcare providers to make both actors more effective, but, if they don’t, the interventions should be so much more effective per marginal dollar than domestic healthcare spending that any crowding effect is more than canceled out. Many crowding problems are more macro than micro (affecting national policy), so the marginal impact of a new effective NGO on, say, a decision whether or not to increase healthcare spending, is probably minimal. When you’ve got major donors (UN, Gates) spending billions in your country, AMF spending a few extra million is unlikely to have a major effect. But I’m open to arguments here.
This definitely crossed my mind. Assuming he expected it to be published and could guess how bad his responses would look, this would be one of the few rational explanations for his sudden repudiation of ethics.
But it also seems fairly likely that his mind is in a pretty chaotic place and his actions aren’t particularly rational.
Surely everyone on this thread realises that there should be a relevant distinction between being some random hack and ‘the EA journalist’. We’re holding her to higher standards than general journalistic norms.
Thanks for writing this, great to hear that you’re feeling better.
I’m usually a fan of self-experimentation, and the upside of finding an antidepressant with few side-effects (and that you can take a lower dose of) is definitely valuable. This seems especially true if you can stop taking it during better mental health periods, then have it in your arsenal for future use. But I still have a few doubts about this process, and I’m a little concerned that some of the premises behind your experiment need a bit more scrutiny. I hope someone with a bit more domain-specific knowledge can correct me if I’m wrong, or improve my arguments if I have a point. I’m also aware that there’s no such thing as a ‘perfect self-experiment’, and I don’t think there are obvious ways that you could have improved the experiment. But here are a few things that I’d like to hear your thoughts on:
Firstly, the depression episode was triggered by a disruptive external factor- the pandemic. This would probably invalidate any observational study conducted in the same period. As this external factor improved, and people could start travelling/ socialising normally, you might expect symptoms to lift naturally from mid-2021 onwards. From what you’ve mentioned here, you don’t seem to have disproved this hypothesis. I gather that depressive episodes seem to last a median of about 6 months, with treatment not making a huge difference for duration within the first year (some obvious caveats about selection effects here). How do you consider the possibility that you would have recovered without antidepressants?
Secondly, the process of switching between 5-6 antidepressants seems to be a significant confounding factor here. I don’t know how good the evidence base for the guidelines link you sent was, but it seems likely that the multiple effects of antidepressants (starting, side-effects, stopping, potential relapse) are significant enough to really mess up any attempts to have a ‘clean slate’ between treatments, and therefore to make it an unfair comparison. It seems possible that what you thought was a negative reaction to x medicine was actually contingent on having just tapered off y medicine and/ or experiencing a relapse. Does that seem plausible, or do you think that there was a stable enough baseline for comparisons to be valid?
Third, just a bit of concern about the downsides of the experiment. There are some long-term side-effects to antidepressants, and they seem understudied for fairly obvious reasons (most clinical studies only last for 6 months, no long-term RCTs). There seem to be a few studies that point to longer-term risks and ‘oppositional effects’ being underestimated. Unknown confounding factors and additional health risks from going on and off antidepressants would make me very concerned. Obviously, untreated depression also has a range of health risks, so I don’t want to discount the other side of the ledger, but I would definitely not be confident that I was doing something safe. How confident do you feel in your comparison of these risks? And did you feel that you had to convince yourself against a (potentially irrational) fear of over-medication?
Finally, a bit unrelated, there’s a meta question that often comes to mind when I read posts about more rational/ self-experimenting approaches to health issues, which is: “How strong should our naturalistic bias/heuristic be when approaching mental health/ general health issues?” Particularly for my own health, I have a moderate bias against less ‘natural’ (obviously a very messy term, but I think it’s useful) health solutions. I often feel EAs have the opposite bias, preferring pharmacological solutions, perhaps because they can be tested with a nice clean RCT. I’m interested in what level of bias you (and forum readers) think is optimal.
“If we want to draw in more experienced people, it’d be much easier to just spin up another brand, rather than try to rebrand something that already has particular connotations.”
This strikes me as probably incorrect. Creating a new brand is really hard, and minor shifts in branding to de-emphasise students would be fairly simple. In my experience, the EA brand and EA ideas are sufficiently appealing to a fairly broad range of older people. The problem is that loads of older people are really interested in EA ideas- think Sam Harris’ audience or the median owner of a Peter Singer book- but they find that: a) It’s socially weird being around uni students; b) Few of the materials, from 80k to Intro fellowships, seem targeted to them; c) It’s way harder to commit to a social movement.
I’ve facilitated for EA intro programs with diverse ages, and the ‘next steps’ stage at the end of an intro fellowship is way different for 20-year-olds than for 40-year-olds- for a 20-year-old, basically “Just go to your uni EA group and get more involved” is a good level of commitment, whereas a 40-year-old has to make far more difficult choices. But I also feel that if this 40-year-old is willing to commit time to EA, this is a more costly signal than a student doing so, so I often feel bullish about their career impact.

My preferred solutions are fairly marginal, just making it a bit easier and more comfortable for older people to get involved: 1) Groups like 80k put a bit more effort into advice for later-career people; 2) Events targeting older high-impact professionals (and more ‘normal’ older people; EA for parents is a good idea); 3) Highlight a few ‘role models’ (on the EA intro course, for example, or an 80k podcast guest)- people who’ve become high-impact EAs in later life.
The claim that we wouldn’t see similar evolution of moral reasoning a second time doesn’t seem weird to me at all. The claim that we should assume that we’ve been exceptionally lucky (top 10%) might be a bit weird. Despite a few structural factors (more complex, more universal moral reasoning develops with economic complexity), I see loads of contingency and path dependence in the way that human moral reasoning has evolved. If we re-ran the last few millennia 1000 times, I’m pretty convinced that we’d see significant variation in norms and reasoning, including:
Some worlds with very different moral foundations- think a more Confucian variety of philosophy emerging in classical Athens, rather than Socratic-Aristotelian philosophy. (The emergence of analytical philosophy in classical Athens seems like a very contingent event with far-reaching moral consequences).
Some worlds in which ‘dark ages’ (decay/ stagnation in moral reasoning) persisted for longer or shorter periods, or where intellectual revolutions never happened, or happened earlier.
Worlds where empires with very different moral foundations from the British/ American ones would have dominated most of the world during the critical modernisation period.
Worlds where seemingly small changes would have huge ethical implications- imagine the pork taboo persisting in Christianity, for example.
The argument that we’ve been exceptionally lucky is more difficult to examine using a longer timeline. We can imagine much better and much worse scenarios, and I can’t think of a strong reason to assume either way. But with a shorter timeline we can make some meaningful claims about things that could have gone better or worse. It does feel like there are many ways that the last few hundred years could have led to much worse moral philosophies becoming more globally prominent- particularly if other empires (Qing, Spanish, Ottoman, Japanese, Soviet, Nazi) had become more dominant.
I’m fairly uncertain about this latter claim, so I’d like to hear from people with more expertise in world history/ history of moral thought to see if they agree with my intuitions about potential counterfactuals.
Agree with this completely.
The fact that this same statistical manoeuvre could be used to downplay nuclear war, vaccines for diseases like polio, climate change or AI risk, should also be particularly worrying.
Another angle is that the number of deaths is directly influenced by the amount of funding- the article says that “the scale of this issue differs greatly from pandemics”, but it could plausibly be the case that terrorism isn’t an inherently less significant/ deadly issue, but counterterrorism funding works extremely well- that’s why deaths are so low.
https://www.morganstanley.com/ideas/obesity-drugs-food-industry

This study doesn’t make Semaglutide look especially promising for animal welfare (it suggests an increase in poultry and fish consumption), but I’m not sure how rigorous the research is, so I’d be excited to read other sources.