Gave a 57% probability that AGI (or similar) would not imply TAI, i.e. would not imply an effect on the world’s trajectory at least as large as the Industrial Revolution.
My impression (I could be wrong) is that this claim is interestingly contrarian among EA-minded AI researchers. I see a potential tension between how much weight you give this claim within your framework, versus how much you defer to outside views (and potentially even modest epistemology – gasp!) in the overall forecast.
I find that 57% very difficult to believe. 10% would be a stretch.
Having intelligent labor that can be quickly produced in factories (by companies that have been able to increase output by millions of times over decades), and that can do tasks such as improving the efficiency of robots (already cheap relative to humans where we have the AI to direct them, and that’s before reaping economies of scale from producing billions) and solar panels (which already have energy payback times on the order of one year in sunny areas), along with still-abundant untapped energy resources orders of magnitude greater than what our current civilization taps on Earth (and a billionfold more for the Solar System), makes it very difficult to make the “AGI but no TAI” world coherent.
Cyanobacteria can double in 6–12 hours under good conditions, and mice can grow their population more than 10,000x in a year. So machinery can be made to replicate quickly, and trillions of von Neumann–equivalent researcher-years (but with AI advantages) can move us further towards that from existing technology.
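(A small back-of-the-envelope sketch, added for illustration, of what those replication figures imply; the conversions below, e.g. the roughly 27-day doubling time for mice, are derived from the numbers quoted above rather than taken from the original comment.)

```python
import math

def doubling_time_days(growth_factor: float, period_days: float) -> float:
    """Doubling time implied by multiplying by `growth_factor` over `period_days`."""
    return period_days * math.log(2) / math.log(growth_factor)

# Mice: a >10,000x population increase in a year implies roughly a 27-day doubling time.
print(f"mice: ~{doubling_time_days(10_000, 365.25):.1f} days per doubling")

# Cyanobacteria: a 6-12 hour doubling time implies hundreds of doublings per year.
for hours in (6, 12):
    print(f"cyanobacteria ({hours} h doubling): ~{365.25 * 24 / hours:.0f} doublings per year")
```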
I predict that cashing out the given reasons into detailed descriptions will result in inconsistencies or very implausible requirements.
Thanks for these comments and for the chat earlier!
It sounds like, to you, AGI means ~”human minds but better”* (maybe that’s the case for everyone who’s thought deeply about this topic, I don’t know). On the other hand, the definition I used here, “AI that can perform a significant fraction of cognitive tasks as well as any human and for no more money than it would cost for a human to do it”, falls well short of that on at least some reasonable interpretations. I definitely didn’t mean to use an unusually weak definition of AGI here (I was partly basing it on this seemingly very weak definition from LessWrong, i.e. “a machine capable of behaving intelligently over many domains”), but maybe I did.
Under at least some interpretations of “AI that can perform a significant fraction of cognitive tasks as well as any human and for no more money than it would cost for a human to do it”, you don’t (as I understand it) think that AGI strongly implies TAI; but my impression is that you don’t think AGI under this definition is the right thing to analyse.
Given your AGI definition, I probably want to give a significantly larger probability to “AGI implies TAI” than I did in this post (though on an inside view I’m probably not in “90% seems on the low end” territory, having not thought about this enough to have that much confidence).
I probably also want to push back my AGI timelines at least a bit (e.g. by checking what AGI definitions my outside view sources were using; though I didn’t do this very thoroughly in the first place so the update might not be very large).
*I probably missed some nuance here, please feel free to clarify if so.
On the object level (I made the other comment before reading on), you write:
My impression from talking to Phil Trammell at various times is that it’s just really hard to get such high growth rates from a new technology (and I think he thinks the chance that AGI leads to >20% per year growth rates is lower than I do).
Maybe this is talking about definitions, but I’d say that “like the Industrial Revolution or bigger” doesn’t have to mean literally >20% growth per year. Things could be transformative in other ways, and I feel like things would almost certainly accelerate, eventually at least, in a future controlled with or by AGI.
Edit: And I see now that you’re addressing why you feel comfortable disagreeing:
I sort of feel like other people don’t really realise / believe the above so I feel comfortable deviating from them.

I’m not sure about that. :)
I think I might have got the >20% number from Ajeya’s biological anchors report. Of course, I agree that, say, 18% growth for 20 years might also be at least as big a deal as the Industrial Revolution. It’s just a bit easier to think about a particular growth level (for me anyway). Based on this, maybe I should give some more probability to the “high enough growth for long enough to be at least as big a deal as the Industrial Revolution” scenario than when I was thinking just about the 20% number. (Edit: just to be clear, I did also give some (though not much) probability to non-extreme-economic-growth versions of transformative AI)
I guess this wouldn’t be a big change though so it’s probably(?) not where the disagreement comes from. E.g. if people are counting 10% growth for 10 years as at least as big a deal as the Industrial Revolution I might start thinking that the disagreement mostly comes from definitions.
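(For concreteness, a quick sketch of the cumulative multipliers that the growth figures in this exchange correspond to; the arithmetic is added here for illustration and is not from the original discussion.)

```python
def cumulative_multiplier(annual_growth: float, years: int) -> float:
    """Total expansion after `years` of constant growth at `annual_growth` per year."""
    return (1 + annual_growth) ** years

scenarios = [
    (0.20, 20),  # the >20%/year threshold, sustained for 20 years: ~38x
    (0.18, 20),  # "18% growth for 20 years": ~27x
    (0.10, 10),  # "10% growth for 10 years": ~2.6x
]
for rate, years in scenarios:
    print(f"{rate:.0%} for {years} years -> ~{cumulative_multiplier(rate, years):.1f}x")
```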
I phrased my point poorly. I didn’t mean to put the emphasis on the 20% figure so much as on the assumption that things will be transformative in a way that fits neatly into the economic growth framework. My concern is that any operationalization of TAI as “x% growth per year(s)” is quite narrow and doesn’t allow for scenarios where AI systems are deployed to secure influence and control over the future first. Maybe there’ll be a war and the “TAI” systems secure influence over the future by wiping out most of the economy except for a few heavily protected compute clusters and resource/production centers. Maybe AI systems are deployed primarily as governance advisors, to help with beneficial regulation, and stay out of the rest of the economy. And so on.
I think things will almost certainly be transformative one way or another, but if you therefore expect to always see stock market increases of >20%, or increases to other economic growth metrics, then maybe that’s thinking too narrowly. The stock market (or standard indicators of economic growth) is not what ultimately matters. Power-seeking AI systems would prioritize “influence over the long-term future” over “short-term indicators of growth”. Therefore, I’m not sure we’d see economic growth right when “TAI” arrives. The way I conceptualize “TAI” (and maybe this is different from other operationalizations, though, going by memory, I think it’s compatible with the way Ajeya framed it in her report, since she framed it as “capable of executing a ‘transformative task’”) is that “TAI” is certainly capable of bringing about a radical change in growth mode, eventually, but it may not necessarily be deployed to do that. I think “where’s the point of no return?” is a more important question than “Will AGI systems already transform the economy 1, 2, or 4 years after their invention?”
That said, I don’t think the above differences in how I’d operationalize “TAI” are cruxes between us. From what you say in the writeup, it sounds like you’d be skeptical about both: that AGI systems could transform the economy(/world) directly, and that they could transform it eventually via influence-securing detours.
Thanks, this was interesting. Reading this, I think maybe I have a somewhat higher bar than you for what counts as transformative (i.e. at least as big a deal as the Industrial Revolution). And again, just to say I did give some probability to transformative AI that didn’t act through economic growth. But the main thing that stands out to me is that I haven’t really thought all that much about the different ways powerful AI might be transformative (as is also the case for almost everything else here too!).
I see a potential tension between how much weight you give this claim within your framework, versus how much you defer to outside views
I don’t know, for what it’s worth I feel like it’s pretty okay to have an inside view that’s in conflict with most other people’s and to still give a pretty big weight (i.e. 80%) to the outside view. (maybe this isn’t what you’re saying)
(and potentially even modest epistemology – gasp!)
Not sure I understood this, but the related statement “epistemic modesty implies Ben should give more than 80% weight to the outside view” seems reasonable. Actually, maybe you’re saying “your inside view is so contrarian that it is very inside-view-y, which suggests you should put more weight on the outside view than would otherwise be the case”; maybe I can sort of see that.
My understanding is that Lukas’s observation is more like:
At some points (e.g. P(AGI) timelines) you seem to give a lot of weight to (what you call) outside views and/or seem to be moved by ‘modest epistemology’.
But for P(TAI|AGI) your bottom line is very different from what most people in the community seem to think. This suggests you’re not updating much toward their view, and so don’t use “outside views”/modest epistemology here.
These suggest you’re using a different balance of sticking with your inside view vs. updating toward others for different questions/parameters. This does not need to be a problem, but it at least raises the question of why.
Yes, that’s what I meant. And FWIW, I wasn’t sure whether Ben was using modest epistemology (in my terminology, outside-view reasoning isn’t necessarily modest epistemology), but there were some passages in the original post that suggest low discrimination on how to construct the reference class. E.g., “10% on short timelines people” and “10% on long timelines people” suggests that one is simply including the sorts of timeline credences that happen to be around, without trying to evaluate people’s reasoning competence. For contrast, imagine wording things like this:
“10% credence each to persons A and B, who both appear to be well-informed on this topic and whose interestingly different reasoning styles both seem defensible to me, in the sense that I can’t confidently point out why one of them is better than the other.”

Thanks, this was helpful as an example of one way I might improve this process.
But for P(TAI|AGI) your bottom line is very different from what most people in the community seem to think
Ah right, I get the point now, thanks. I suppose my P(TAI|AGI) is meant to be my inside view as opposed to my all-things-considered view, because I’m using it only for the inside view part of the process. The only things that are meant to be all-things-considered views are the things that come out of the long procedure I describe (i.e. the TAI and AGI timelines). But probably this wasn’t very clear.
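(A minimal sketch, added for illustration, of one way an all-things-considered forecast could combine an inside view with outside-view sources, assuming the combination is a simple weighted mixture; the 20/80 inside/outside split and the 10% source weights echo figures mentioned in this thread, but the probabilities, source labels, and function below are placeholders, not the post’s actual inputs or procedure.)

```python
def combine(inside_p: float, outside_sources: dict, inside_weight: float = 0.2) -> float:
    """Weighted mixture: `inside_weight` on the inside view, the remainder spread
    over outside-view sources according to their (weight, probability) pairs."""
    total_outside_weight = sum(w for w, _ in outside_sources.values())
    outside_p = sum(w * p for w, p in outside_sources.values()) / total_outside_weight
    return inside_weight * inside_p + (1 - inside_weight) * outside_p

# Placeholder numbers purely for illustration.
inside_view_p = 0.30  # hypothetical inside-view probability for some AGI/TAI question
outside_sources = {
    "short-timelines people": (0.10, 0.80),  # (weight within the outside view, their P)
    "long-timelines people":  (0.10, 0.05),
    "other reference points": (0.80, 0.40),
}
print(f"all-things-considered P: {combine(inside_view_p, outside_sources):.2f}")  # ~0.38
```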