I think less selective quotation makes the line of argument clear.
Continuing the first quote:
The scenario in the short story is not the median forecast for any AI futures author, and none of the AI2027 authors actually believe that 2027 is the median year for a singularity to happen. But the argument they make is that 2027 is a plausible year, and they back it up with images of sophisticated-looking modelling like the following:
[img]
This combination of a compelling short story and seemingly rigorous research may have been the secret sauce that let the article go viral and be treated as a serious project:
[quote]
Now, I was originally happy to dismiss this work and just wait for their predictions to fail, but this thing just keeps spreading, including a YouTube video with millions of views. So I decided to actually dig into the model and the code, and try to understand what the authors were saying and what evidence they were using to back it up.
The article is huge, so I focussed on one section alone: their “timelines forecast” code and accompanying methodology section. Not to mince words, I think it’s pretty bad. It’s not just that I disagree with their parameter estimates, it’s that I think the fundamental structure of their model is highly questionable and at times barely justified, there is very little empirical validation of the model, and there are parts of the code that the write-up of the model straight up misrepresents.
So the summary of this would not be “… and so I think AI 2027 is a bit less plausible than the authors do”, but something like: “I think the work motivating AI 2027 being a credible scenario is, in fact, not good, and should not persuade those who did not believe this already. It is regrettable this work is being publicised (and perhaps presented) as much stronger than it really is.”
Continuing the second quote:
What I’m most against is people taking shoddy toy models seriously and basing life decisions on them, as I have seen happen for AI2027. This is just a model for a tiny slice of the possibility space for how AI will go, and in my opinion it is implemented poorly even if you agree with the author’s general worldview.
The right account for decision making under (severe) uncertainty is up for grabs, but in the ‘make a less shoddy toy model’ approach the quote would urge having a wide ensemble of different ones (including, say, those which are sub-exponential, ‘hit the wall’ or whatever else), and further urge we should put very little weight on the AI2027 model in whatever ensemble we will be using for important decisions.
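For concreteness, here is a minimal sketch of what that ensemble prescription could look like. Everything in it is hypothetical: the three toy models, their distributions, and the weights are made up for illustration and are not drawn from titotal or the AI 2027 authors; the only point carried over from the quote is that the AI 2027-style model gets very little of the total mass.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Sample "years until AGI" from several structurally different toy models
# (all parameters are illustrative, not estimates from any source).
samples = {
    # Fast, AI 2027-style trajectory
    "ai2027_style": rng.lognormal(mean=np.log(3), sigma=0.5, size=N),
    # Slower, sub-exponential progress
    "sub_exponential": rng.lognormal(mean=np.log(20), sigma=0.8, size=N),
    # "Hit the wall": progress stalls, long fat tail
    "hit_the_wall": rng.lognormal(mean=np.log(60), sigma=1.0, size=N),
}

# Ensemble weights (again illustrative), with very little weight on the
# AI 2027-style model, per the quote's prescription.
weights = {"ai2027_style": 0.05, "sub_exponential": 0.55, "hit_the_wall": 0.40}

# Draw from the mixture: pick a model per sample, then take its draw.
models = list(samples)
choice = rng.choice(len(models), size=N, p=[weights[m] for m in models])
mixture = np.array([samples[models[c]][i] for i, c in enumerate(choice)])

# The ensemble then yields the kind of quantity discussed below,
# e.g. probability mass on AGI within ~5 years (roughly "by 2030").
print("P(AGI within 5 years) under this toy ensemble:", (mixture <= 5).mean())
```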
Titotal actually ended their post with an alternative prescription:
I think people are going to have to deal with the fact that it’s really difficult to predict how a technology like AI is going to turn out. The massive blobs of uncertainty shown in AI 2027 are still severe underestimates of the uncertainty involved. If your plans for the future rely on prognostication, and this is the standard of work you are using, I think your plans are doomed. I would advise looking into plans that are robust to extreme uncertainty in how AI actually goes, and avoid actions that could blow up in your face if you turn out to be badly wrong.
I would advise looking into plans that are robust to extreme uncertainty in how AI actually goes, and avoid actions that could blow up in your face if you turn out to be badly wrong.
Seeing you highlight this now it occurs to me that I basically agree with this w.r.t. AI timelines (at least on one plausible interpretation, my guess is that titotal could have a different meaning in mind). I mostly don’t think people should take actions that blow up in their face if timelines are long (there are some exceptions, but overall I think long timelines are plausible and actions should be taken with that in mind).
A key thing that titotal doesn’t mention is how much probability mass they put on short timelines like, say, AGI by 2030. This seems very important for weighing various actions, even though we both agree that we should also be prepared for longer timelines.
In general, I feel like executing plans that are robust to extreme uncertainty is a prescription that is hard to follow without having at least a vague idea of the distribution of likelihood of various possibilities.
Thanks! This is helpful, although I would still be interested to hear if they believe there are models that “have strong conceptual justifications or empirical validation with existing data”.