I’ve also wondered what might explain the apparent discrepancy between these predictions and reality. I think the point about technical problems that you emphasised is probably among the most important factors. My first thought was a different one, though: wishful thinking. Perhaps wishful thinking about clean meat timelines is an important factor behind the apparently bad track record of these predictions. My rationale is that, in my impression, clean meat, even more so than many other technologies, is tied very closely and viscerally to something – factory farming – that a considerable share (I’d guess) of the people working on it deem a moral catastrophe.
I don’t think Anders Sandberg uses the EA Forum, so I’ll just repost what Anders wrote in reaction to this on Twitter:
“I suspect we have a “publication bias” of tech predictions where the pessimists don’t make predictions (think the tech impossible or irrelevant, hence don’t respond to queries, or find their long timescales so uncertain they are loath to state them).
In this case it is fairly clear that progress is being made but it is slower than hoped for: predictions as a whole made a rate mistake, but perhaps not an eventual outcome mistake (we will see). I think this is is[sic] a case of Amara’s law.
Amara’s law (that we overestimate the magnitude of short-term change and underestimate long-term change) can be explained by exponential-blindness, but also hype cycles, and integrating a technology in society is a slow process”
Fwiw, I broadly agree. I think those in the industry making public predictions plausibly have “good” reasons to skew optimistic. Attracting the funding, media attention, and talent necessary to make progress might simply require generating buzz and optimism – even if the progress this generates comes at a slower rate than their public predictions imply. So it would actually be odd if the majority of predictions by these actors didn’t resolve as negative and overly optimistic (they aren’t trying to rank high on the Metaculus leaderboard).
So those who are shocked by the results presented here may have cause to update: put less weight on predictions from cultured meat companies and the media outlets repeating them, and rely on something else. Those who aren’t surprised by these results probably already placed an appropriate weight on public predictions from the industry.
As for how this industry’s predictions compare to others’, I too would like to see that comparison and identify the right reference class(es).
I think it should be pretty clear that there are a ton of biases at play. Expert Political Judgment describes a much earlier study of expert/pundit forecasting ability, and the results were very poor. I don’t see why we should have expected anything different here.
One thing that might help would be “meta-forecasting”: having expert forecasters predict, in advance, the accuracy of the average statements made by different groups in different domains. I’d predict they would give pretty poor scores to most of these groups, especially “companies making public claims about their own technologies” and “magazines and public media” (which seem just as biased).
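To make the idea concrete, here is a minimal, purely illustrative sketch (the groups, predictions, and numbers are hypothetical, not drawn from this post’s data) of how a meta-forecast over groups could be scored against each group’s realised hit rate:

```python
from collections import defaultdict

# Purely illustrative sketch of the "meta-forecasting" idea.
# All groups, predictions, and numbers are hypothetical.

# Each record is one public prediction, tagged with its source group
# and whether it ultimately resolved correctly.
predictions = [
    {"group": "companies", "correct": False},
    {"group": "companies", "correct": False},
    {"group": "companies", "correct": True},
    {"group": "media", "correct": False},
    {"group": "media", "correct": True},
    {"group": "independent forecasters", "correct": True},
]

# A meta-forecaster states in advance what fraction of each group's
# predictions they expect to resolve correctly.
meta_forecast = {
    "companies": 0.25,
    "media": 0.35,
    "independent forecasters": 0.55,
}

# Compare each group's realised hit rate with the meta-forecast.
totals, hits = defaultdict(int), defaultdict(int)
for p in predictions:
    totals[p["group"]] += 1
    hits[p["group"]] += p["correct"]

for group, n in totals.items():
    actual = hits[group] / n
    error = abs(meta_forecast[group] - actual)
    print(f"{group}: hit rate {actual:.2f}, meta-forecast off by {error:.2f}")
```

In practice you’d want a proper scoring rule (e.g. Brier scores on the underlying predictions) and far more data, but the basic exercise of predicting group-level accuracy in advance is straightforward.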
I agree with your meta-meta-forecast.