As a former cultured meat scientist, I think these predictions have been off in large part because the core technical problems are way harder than most people know (or would care to admit). However, I also suspect that forecasts for many other deep tech sectors, even ones that have been quite successful (e.g. space), have not fared any better. I’d be curious to see how cultured meat predictions have done relative to plant-based meat, algal biofuels, rocketry, and maybe others.
It’s also interesting that, as far as I can tell, New Harvest, founded in 2004, basically failed, and we had to wait until the Good Food Institute arrived in 2016 to push things along.
(As a point of comparison, New Harvest claims on its front page to have raised ~$7.5M, presumably over its whole existence, whereas GFI spent $8.9M in 2019 alone.)
This could be partly because GFI got more financial support from the EA community, both from Open Phil and as a result of ACE’s recommendations.
2012: ACE was founded.
2014: ACE did an exploratory review of New Harvest.
2015: Lewis Bollard joined Open Phil in September to start its grantmaking in animal welfare. New Harvest was named a standout charity by ACE at the end of 2015.
2016: GFI was founded. Open Phil made its first animal welfare grants. GFI received its first grant from Open Phil, of $1M. GFI became an ACE top charity at the end of the year.
2017: Open Phil made another grant to GFI, of $1.5M. New Harvest was no longer recommended by ACE at the end of the year.
2019: Open Phil made another grant to GFI, of $4M.
New Harvest never received any grants from Open Phil.
Basically, it’s possible New Harvest failed because it was never really given much of a chance.
That said, there may still have been good reasons to support GFI over New Harvest in the first place. Some are discussed here.
I’ve also wondered what might explain the apparent discrepancy between these predictions and reality. The technical problems you emphasised are probably among the most important factors. My first thought was a different one, though: wishful thinking. Perhaps wishful thinking about clean meat timelines is an important part of why these predictions have such an apparently bad track record. My rationale: clean meat, even more so than many other technologies, is tied closely and viscerally to factory farming, which a considerable share (I’d guess) of the people working on it deem a moral catastrophe.
I don’t think Anders Sandberg uses the EA Forum, so I’ll just repost what Anders wrote in reaction to this on Twitter:
“I suspect we have a “publication bias” of tech predictions where the pessimists don’t make predictions (think the tech impossible or irrelevant, hence don’t respond to queries, or find their long timescales so uncertain they are loath to state them).
In this case it is fairly clear that progress is being made but it is slower than hoped for: predictions as a whole made a rate mistake, but perhaps not an eventual outcome mistake (we will see). I think this is is[sic] a case of Amara’s law.
Amara’s law (that we overestimate the magnitude of short-term change and underestimate long-term change) can be explained by exponential-blindness, but also hype cycles, and integrating a technology in society is a slow process”
Fwiw, I broadly agree. Those in the industry making public predictions plausibly have “good” reasons to skew optimistic: attracting the funding, media attention, and talent necessary to make progress might simply require generating buzz and optimism, even if the resulting progress is slower than their public predictions imply. So it would actually be odd if the majority of predictions by these actors didn’t resolve as overly optimistic (they aren’t trying to rank high on the Metaculus leaderboard).
So those who are shocked by the results presented here may have cause to update, put less weight on predictions from cultured meat companies and the media repeating them, and rely on something else. Those who aren’t surprised by these results probably already placed appropriate weight on public predictions from the industry.
On how this industry’s predictions compare to others’, I too would like to see that comparison and identify the right reference class(es).
I think it should be pretty clear that there are a ton of biases at play. Tetlock’s Expert Political Judgment reported on a much earlier study of expert/pundit forecasting ability, and the results were very poor. I don’t see why we should have expected anything different here.
One thing that might help would be “meta-forecasting”. We could later have some expert forecasters predict the accuracy of average statements made by different groups in different domains. I’d predict that they would have given pretty poor scores to most of these groups, especially “companies making public claims about their own technologies”, and “magazines and public media” (which also seem just as biased).
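To make the meta-forecasting idea concrete, one could retrospectively score each group’s resolved predictions with a proper scoring rule such as the Brier score and compare group averages. A minimal sketch, with entirely invented data and hypothetical group labels:

```python
# Sketch: comparing prediction accuracy across source groups via Brier scores
# (lower is better). All records below are invented for illustration only.

def brier_score(forecast: float, outcome: int) -> float:
    """Squared error between a probability forecast and a 0/1 outcome."""
    return (forecast - outcome) ** 2

# Each record: (source group, probability assigned, whether it came true).
predictions = [
    ("company", 0.9, 0), ("company", 0.8, 0), ("company", 0.7, 1),
    ("media", 0.9, 0), ("media", 0.8, 1),
    ("independent", 0.4, 0), ("independent", 0.6, 1),
]

# Group the per-prediction scores by source.
scores: dict[str, list[float]] = {}
for group, p, outcome in predictions:
    scores.setdefault(group, []).append(brier_score(p, outcome))

# Report the mean Brier score per group.
for group, vals in sorted(scores.items()):
    print(f"{group}: mean Brier = {sum(vals) / len(vals):.3f}")
```

With a real dataset of resolved industry predictions, this kind of comparison would show directly whether companies and media outlets score worse than other forecaster groups.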
I agree with your meta-meta-forecast.