This essay presents itself as a counterpoint to: “AI leaders have predicted that it will enable dramatic scientific progress: curing cancer, doubling the human lifespan, colonizing space, and achieving a century of progress in the next decade.”
But this essay is talking about “AI that is very much like the LLMs of July 2025” whereas those “AI leaders” are talking about “future AI that is very very different from the LLMs of July 2025”.
Of course, we can argue about whether future AI will in fact be very very different from the LLMs of July 2025, or not. And if so, we can argue about exactly how far into the future that will happen.
But as written, this essay is not a response to those “AI leaders”, but rather a completely different topic. (…Which is fine! It’s still a topic worth discussing! It’s just that the intro and framing are misleading.)
[…also copied this to a comment on the OP substack]
Thank you for the feedback; I’ve updated the intro for clarity.