I am roughly in agreement with this post by an AI expert responding to the other (less good) short-timeline article going around.
This post just points out that the AI 2027 article is an attempt to flesh out a particular scenario rather than an argument for short timelines, a point the authors of AI 2027 would themselves agree with.
I thought that instead of critiquing the parts I’m not an expert in, I might take a look at the part of this post that intersects with my field, where you mention materials science discovery, and pour just a little bit of cold water on it.
So, an important thing to note is that this was not an LLM (neither was AlphaFold), but a specially designed deep learning model for generating candidate material structures.
Yes, I explicitly wanted to point out that AI can be useful to science beyond LLMs.
I covered a bit about them in my last article, and this is a nice bit of evidence for their usefulness. The possibility space for new materials is ginormous, and humans are not that good at generating new ones: the paper showed that this tool boosted productivity by making that process significantly easier. I don’t like how the paper described this as “idea generation”: it evokes the idea that the AI is having its own Newtonian flashes of scientific insight, when actually it’s just mass-generating candidate materials that an experienced professional can sift through.
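To make that concrete, here’s a minimal sketch of the generate-then-sift workflow I mean (every name in it is a hypothetical stand-in, not the paper’s actual model): a generative model mass-produces candidates, a cheap learned screener ranks them, and the human expert only ever looks at the shortlist.

```python
# Minimal sketch of a generate-then-sift materials pipeline.
# Everything here is a hypothetical stand-in, not the actual
# model from the paper.
import random

ELEMENTS = ["Li", "Na", "Mg", "Al", "Si", "Ti", "Fe", "Co", "Ni", "Cu"]

def generate_candidates(n):
    """Stand-in for a trained generative model: mass-produce
    candidate compositions, most of which will be junk."""
    return [tuple(random.sample(ELEMENTS, 3)) for _ in range(n)]

def predicted_stability(candidate):
    """Stand-in for a cheap screening model (e.g. a learned
    formation-energy predictor); here just a random score."""
    return random.random()

def sift(candidates, top_k=20):
    """Rank candidates by the screener and keep only the best
    few for a human expert to evaluate properly."""
    return sorted(candidates, key=predicted_stability, reverse=True)[:top_k]

pool = generate_candidates(100_000)  # brute-force breadth, no insight
shortlist = sift(pool)               # machine triage
for c in shortlist:                  # human judgment comes last
    print("-".join(c))
```

The intelligence in this loop is all in the screening and the final human sift; the generator just covers the space faster than a person could.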
I agree it’s not having flashes of insight, but I also think people underestimate how useful brute-force problem solving could be. I expect AI to become useful to science well before it has ‘novel insights’ in the way we imagine genius humans to have them.
I think your quoted statement is technically true, but it’s worth mentioning that the 80%-faster figure was only for the people previously in the top decile of performance (i.e. the best researchers); for people who were not performing well, there was no evidence of a real difference.
I do say it increased the productivity of ‘top’ researchers, and this is also clarified in the link. (To my mind, it makes it more impressive, since it was adding value even to the best researchers.)
In practice the effect of the tool on progress was smaller than this: it was plausibly credited with increasing the number of new patents at the firm by roughly 40%, and the number of actual prototypes by 20%. You can also see that productivity is not continuing to increase: they got their boost from the improved generation pipeline, and now the bottleneck is somewhere else.
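To see why a big speedup in one stage doesn’t translate into an equivalent end-to-end gain, here’s a toy bottleneck calculation (the stage times are invented for illustration, not taken from the paper):

```python
# Toy bottleneck arithmetic: speeding up one stage of a pipeline
# only helps until another stage becomes the constraint.
# (Stage times are made up for illustration, not from the paper.)
stages = {"generate ideas": 10.0, "synthesize": 30.0, "test": 20.0}

before = sum(stages.values())
stages["generate ideas"] /= 1.8      # an 80%-faster generation step
after = sum(stages.values())

print(f"end-to-end speedup: {before / after:.2f}x")  # ~1.08x, not 1.8x
```

Once generation is cheap, synthesis and testing dominate, which is consistent with the gains plateauing rather than compounding.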
20% more prototypes and 40% more patents sound pretty meaningful.
I was just trying to illustrate that AI is already starting to contribute to scientific productivity in the near term.
Productivity won’t continually increase until something more like a fully automated scientist is created (which we clearly don’t already have).
To be clear, this is still great, and a clear deep learning success story, but it’s not really in line with colonizing Mars by 2035 or whatever the ASI people are saying now.
I’m not sure I follow. No one is claiming that AI can already do these things – the claim is that if progress continues, then you could reach a point where AI accelerates AI research, and from there you get to something like ASI, and from there space colonisation. To argue against that, you need to show that the rate of progress is insufficient to get there.
As I said, I don’t think your statement was wrong, but I want to give people a more accurate perception as to how AI is currently affecting scientific progress: it’s very useful, but only in niches which align nicely with the strengths of neural networks. I do not think similar AI would produce similarly impressive results in what my team is doing, because we already have more ideas than we have the time and resources to execute on.
I can’t really assess how much speedup we could get from a superintelligence, because superintelligences don’t exist yet and may never exist. I do think that 3xing research output with AI in science is an easier proposition than building a digital super-Einstein, so I expect to see the former before the latter.
Thank you!