Yep, that’s admittedly a risk of a framework like this. We’ve tried our best not to make that mistake, and have gone to some length explaining why we think we haven’t. If you disagree, please help us by telling us which disjunctive paths you think we’ve missed or which probabilities you think we’ve underestimated.
As we asked in the post:
If you disagree with our admittedly imperfect guesses, we kindly ask that you supply your own preferred probabilities (or framework modifications). It’s easier to tear down than build up, and we’d love to hear how you think this analysis can be improved.
The primary issue, I guess, is that the normal rules don’t easily apply here. We don’t have good past data to make predictions from, so every new requirement added introduces more complexity (and chaos), which might make the estimate less accurate than using fewer variables. Thinking in terms of “all other factors remaining equal, what are the odds of x” sounds less accurate, but might be the only way to avoid being consumed by all the potential variables. Ones you don’t even mention that I could name include “US democracy breaks down”, “AIs hack the grid”, “AIs break the internet/infect every interconnected device with malware”, etc.* You could just keep adding requirements until your probabilities drop to near 0, because it’ll be difficult to say with much confidence that any of them is <.01 likely to occur, even though a lot of them probably are. It’s probably better to group several constraints together and give a probability that one or more of them occurs (example: “chance that recession/war/regulation/other slows or halts progress”), rather than trying to assess the likelihood of each one. Ordinarily this wouldn’t be a problem, but we don’t have the data we’d normally work with.
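To make that arithmetic concrete, here’s a minimal sketch (with made-up numbers, not the paper’s) of how stacking many individually-unlikely derailment factors drags the joint estimate down, versus grouping them into a single bucket:

```python
# Minimal sketch, hypothetical numbers: ten separate "we avoid X" factors,
# each judged 95% likely on its own, still multiply down to ~60% overall.
avoid_probs = [0.95] * 10

joint = 1.0
for p in avoid_probs:
    joint *= p
print(f"P(avoid all 10 factors)     = {joint:.2f}")  # ~0.60

# Versus grouping them into one bucket and estimating it directly, e.g.
# "15% chance that recession/war/regulation/other slows or halts progress":
print(f"P(avoid the grouped bucket) = {1 - 0.15:.2f}")  # 0.85
```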
Here’s a brief writeup of some agreements/disagreements I have with the individual constraints.
“We invent algorithms for transformative AGI”
I don’t know how this is only 60%. I’d place it at >.5 before 2030, let alone 2043. This is just guesswork, but we seem to be one or two breakthroughs away.
“We invent a way for AGIs to learn faster than humans 40%”
I don’t really know what this means, why it’s required, or why it’s so low. I see the paper mentions that humans are sequential learners whose learning takes years, but AIs don’t seem to work that way; imagine if GPT-4 took years just to learn basic words. AIs also already seem able to learn faster than humans: they currently need more data, but less compute than a human brain, and computers can already process information much faster than a brain. And you don’t even need them to learn faster than humans, since once they learn a task, they can just copy that skill to all other AIs. This is a critical point. A human will spend years in med school only because a senior in the field can’t copy their weights and send them to a grad student.
Also, I’m confused how this is at .4, given that it’s conditional on TAI happening. If you have algorithms for TAI, why couldn’t they also invent algorithms that learn faster than humans? We already see current AIs improving algorithmic efficiency (as just one recent example: https://www.deepmind.com/blog/alphadev-discovers-faster-sorting-algorithms). Improving algorithms is probably one of the easiest things a TAI could do, without having to do any physical-world experimentation.
“AGI inference costs drop below $25/hr (per human equivalent) 16%”
I really don’t see how this is 16%. Once an AI obtains a new capability, it doesn’t seem to cost much to reuse it. Example: GPT-4 was very expensive to train, but it can be used for cents on the dollar afterward. These aren’t mechanical humans; they don’t need to go through repeated training, building up expertise, etc. They only need to do it once, and then it just gets copied.
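As a rough illustration of the amortization point (all numbers here are hypothetical placeholders, not estimates from the paper), a one-time training cost spread over enough deployed copies adds almost nothing to the per-hour cost of reuse:

```python
# Hypothetical placeholder numbers, purely illustrative.
training_cost = 100e6            # assumed one-time training cost, $
copies = 1_000_000               # deployed instances reusing the same weights
hours_per_copy = 8760 * 3        # ~3 years of continuous use per copy
inference_cost_per_hr = 1.0      # assumed marginal compute cost, $/hr

amortized_training = training_cost / (copies * hours_per_copy)
print(f"Amortized training cost: ${amortized_training:.4f}/hr")   # ~$0.004/hr
print(f"Total per copy-hour:     ${inference_cost_per_hr + amortized_training:.2f}")  # ~$1.00
```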
And, like above, if this is conditional on TAI and faster-than-human learning occurring, how is it only at .16? A faster-than-human TAI can (very probably) improve algorithmic efficiency enough to radically drive down the cost.
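For reference, here’s the arithmetic behind my surprise, just multiplying the paper’s stated figures for the first three requirements and reading each as conditional on the previous ones:

```python
# The paper's stated values for the first three requirements, each read as
# conditional on the previous steps succeeding.
p_algorithms      = 0.60   # invent algorithms for transformative AGI
p_fast_learning   = 0.40   # AGIs learn faster than humans, given the above
p_cheap_inference = 0.16   # inference below $25/hr, given both of the above

cumulative = p_algorithms * p_fast_learning * p_cheap_inference
print(f"Cumulative after three steps: {cumulative:.3f}")  # ~0.038
```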
“We invent and scale cheap, quality robots 60%”
This is one where infrastructure and regulation can bottleneck things, so I can understand at least why this is low.
“We massively scale production of chips and power 46%”
If we get TAI, I imagine scaling will continue or even radically increase. We’re already seeing this, and current AIs have much more limited economic potential. We also don’t know whether we actually need to keep scaling, since (as I mentioned) algorithmic efficiency might make this unimportant.
“We avoid derailment by human regulation 70%”
Maybe?
“We avoid derailment by AI-caused delay 90%”
The paper describes this as “superintelligent but expensive AGI may itself warn us to slow progress, to forestall potential catastrophe that would befall both us and it.”
That’s interesting, but if the AI hasn’t coup’d humanity already, wouldn’t this just fall under ‘regulation derails TAI’? Unless there is some other way progress halts that doesn’t involve regulations or AI coups...
“We avoid derailment from wars (e.g., China invades Taiwan) 70%”
Possible, but I don’t think this would derail things for 20 years. Maybe 5.
“We avoid derailment from pandemics 90%”
The chances of a pandemic also rise along with the chances of TAI (or maybe they fall; AI could possibly detect and predict a pandemic much better). This is one of the issues with all of this: everything is so entangled, and it’s not actually that easy to say which way the variables will influence each other. I’m pretty sure it’s not 50/50 which way it goes, so it probably does greatly influence the estimate.
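As a toy example of the entanglement point (numbers entirely made up), treating pandemic risk as independent of AI progress gives a different answer than letting the two move together or apart:

```python
# Toy example, all numbers made up: how entanglement shifts the joint estimate.
p_progress = 0.5                     # chance AI progress stays on track
p_pandemic_independent = 0.10        # pandemic risk treated as an independent term
p_pandemic_if_entangled_up = 0.20    # e.g. advanced AI enables engineered pandemics
p_pandemic_if_entangled_down = 0.05  # e.g. AI surveillance catches outbreaks early

for label, p in [("independent", p_pandemic_independent),
                 ("entangled up", p_pandemic_if_entangled_up),
                 ("entangled down", p_pandemic_if_entangled_down)]:
    print(f"{label:>14}: P(progress and no pandemic) = {p_progress * (1 - p):.3f}")
```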
“We avoid derailment from severe depressions”
Not sure here. It’s not as though everyone will be going out and buying TPUs with or without economic worries. Not all industries slow or halt, even during a depression, and algorithmic efficiency especially seems unlikely to be affected by this.
Overall, I think the hardware and regulatory constraints are the most likely limiting factors. I’m not that sure about anything else.
*I originally wrote up another AI-related scenario, but decided it shouldn’t be publicly stated at the moment.