Point 1: Broad agreement with a version of the original post’s argument
Thanks for this. I think I agree with you that people in the global health and animal spaces should, at the margin, think more about the possibility of Transformative AI (TAI), and of short-timeline TAI in particular.
For animal-focussed people, maybe there’s an argument that, because the default path of a non-TAI future is likely so bad for animals (e.g. persuading people to stop eating animals is really hard, persuading people to intervene to help wild animals is really hard), we might actually want to “bet” heavily on futures *with* TAI, because only those futures hold out the prospect of a big reduction in animal suffering. So we should optimise our actions for worlds where TAI happens, and try to maximise the chances that those futures go very well for non-human animals (a toy sketch of this expected-value logic is below).
I think this is likely less true for global health and wellbeing, where the global trends plausibly look a lot better.
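To make the “bet on TAI worlds” logic a bit more concrete, here’s a minimal expected-value sketch. All the numbers are purely hypothetical placeholders of my own (nothing in the original post implies them); the point is just that optimising for TAI worlds can dominate even at modest probabilities, if the achievable gain in those worlds is much larger.

```python
# Toy expected-value comparison for the "bet on TAI worlds" reasoning.
# All numbers are purely hypothetical placeholders, not estimates from the post.

p_tai = 0.3              # hypothetical probability that TAI arrives
impact_if_tai = 100.0    # hypothetical animal-welfare gain achievable in TAI worlds
impact_if_no_tai = 1.0   # hypothetical gain achievable in non-TAI worlds (slow progress)

# Expected value of optimising your actions for TAI worlds vs. non-TAI worlds,
# assuming your efforts only pay off in the worlds you optimised for.
ev_bet_on_tai = p_tai * impact_if_tai
ev_bet_on_no_tai = (1 - p_tai) * impact_if_no_tai

print(f"EV of optimising for TAI worlds:     {ev_bet_on_tai:.1f}")
print(f"EV of optimising for non-TAI worlds: {ev_bet_on_no_tai:.1f}")
# Even with p_tai well below 0.5, the bet on TAI worlds can dominate
# when the achievable gain in those worlds is much larger.
```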
Point 2: Some reasons to be sceptical about claims of short-timeline Transformative AI
Having said that, there’s something about the apparent certainty in the original post that “TAI is nigh” which prompted me to scribble down some push-back-y thoughts. Below are some plausible-sounding-to-me reasons to be sceptical of high-certainty claims that TAI is close. I don’t pretend that these lines of thought, in and of themselves, demolish the case for short-timeline TAI, but I do think they are worthy of consideration and discussion, and I’d be curious to hear what others make of them:
The prediction of short-timeline TAI is based on speculation about the future. Humans very often get this type of speculation wrong.
Global capital markets aren’t predicting short-timeline TAI. A lot of very bright people, who are highly incentivised to make accurate predictions about the future shape of the economy, are not betting that TAI is imminent.
Indeed, most people in the world don’t seem to think that TAI is imminent. This includes tonnes of *really* smart people, who have access to a lot of information.
There’s a rich history of very clever people making bold predictions about the future, based on reasonable-sounding assumptions and plausible chains of reasoning, which then don’t come true—e.g. Paul Ehrlich’s Population Bomb.
Sometimes even the transhumanist community—where notions of AGI, AI catastrophe risk, etc, started out—gets excited about a certain technological risk/trend which then turns out not to be such a big deal—e.g. nanotech, “grey goo”, etc, in the ’80s and ’90s.
In the past, many radical predictions about the future, based on speculation and abstract chains of reasoning, have turned out to be wrong.
Perhaps there’s a community effect whereby we all hype ourselves up about TAI and short timelines. It’s exciting, scary and adrenaline-inducing to think that we might be about to live through ‘the end of times’.
Perhaps the meme of “TAI is just around the corner/it might kill us all” has a quality which is psychologically captivating, particularly for a certain type of mind (eg people who are into computer science, etc); perhaps this biases us. The human mind seems to be really drawn to “the end is nigh” type thinking.
Perhaps questioning the assumption of short-timeline TAI has become low-status within EA, and potentially risky in terms of reputation, funding, etc, so people are disincentivised to push back on it.
To restate: I don’t think any of these points torpedoes the case for thinking that TAI is inevitable and/or imminent. I just think they are valid considerations, worthy of discussion, as we try to decide how to act in the world.
Thanks for the thoughtful comment!
Re point 1: I agree that the likelihood and expected impact of transformative AI exist on a spectrum. I didn’t mean to imply certainty about timelines, but I chose not to focus on arguing for specific timelines in this post.
Regarding the specific points: they seem plausible but are mostly based on base rates and social dynamics. I think many people’s views, especially those working on AI, have shifted from being shaped primarily by abstract arguments to being informed by observable trends in AI capabilities and investments.
Cheers, and thanks for the thoughtful post! :)
I’m not sure that the observable trends in current AI capabilities definitely point to an almost-certainty of TAI. I love using the latest LLMs and find them amazing, and I do find it plausible that next-gen models, plus making them more agent-like, might be amazing (and scary). I also find it very, very plausible to imagine big productivity boosts in knowledge work. But the claim that this will almost certainly lead to a rapid and complete economic/scientific transformation still feels at least a bit speculative to me...