Thanks! I’ve read and enjoyed a number of your blog posts, and often found myself in agreement.
If you think that extinction risk this century is less than 1%, then in particular, you think that extinction risk from transformative AI is less than 1%. So, for this to be consistent, you have to believe either
a) that it’s unlikely that transformative AI will be developed at all this century, or
b) that transformative AI is unlikely to lead to extinction when it is developed, e.g. because it will very likely be aligned in at least a narrow sense. (I wrote up some arguments for this a while ago.)
Which of the two do you believe, and to what extent? For instance, if you put 10% on transformative AI this century – which is significantly more conservative than “median EA beliefs” – then you’d have to believe that the conditional probability of extinction, given transformative AI, is less than 10%. (I’m not saying I disagree – in fact, I believe something along these lines myself.)
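(To make the arithmetic behind that bound explicit – just a quick sketch, writing ‘TAI’ for ‘transformative AI is developed this century’ and treating extinction from transformative AI as requiring that it is in fact developed:)

\[
P(\text{extinction} \mid \text{TAI}) \;=\; \frac{P(\text{extinction} \wedge \text{TAI})}{P(\text{TAI})} \;\le\; \frac{P(\text{extinction})}{P(\text{TAI})} \;<\; \frac{1\%}{10\%} \;=\; 10\%.
\]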
See my comment to nonn. I’m reluctant to put numbers on those beliefs, so as not to anchor myself; but I find them both very likely – it’s not that one is much more likely than the other. (Here ‘transformative AI not developed this century’ includes ‘AI is not transformative’, in the sense that it doesn’t precipitate a new growth mode in the next century – that is certainly my mainline belief.)
What do you think about the possibility of a growth mode change (i.e. much faster pace of economic growth and probably also social change, comparable to the industrial revolution) for reasons other than AI? I feel that this is somewhat neglected in EA – would you agree with that?
Yes, I’d agree with that. There’s a lot of debate about the causes of the industrial revolution. Very few commentators point to some technological breakthrough as the cause, so it’s striking that people are inclined to point to a technological breakthrough in AI as the cause of the next growth mode transition. Instead, leading theories point to some resource overhang (‘colonies and coal’), or some innovation or change in institutions (more liberal laws and norms in England, or higher wages incentivising automation) or in culture. So perhaps there’s some novel governance system that could drive a higher growth mode, and that’ll be the decisive thing.
I’d also be interested in more details on what these beliefs imply for how we can improve the long-term future. I suppose you are now more sceptical about work on AI safety as the “default” longtermist intervention. But what is the alternative? Do you think we should focus on broad improvements to civilisation, such as better governance, working towards compromise and cooperation rather than conflict and war, or generally trying to make humanity more thoughtful and cautious about new technologies and the long-term future? These are uncontroversially good but not very neglected, and it seems hard to get a lot of leverage in this way. (Then again, maybe there is no way to get extraordinary leverage over the long-term future.)
Also, if we aren’t at a particularly influential point in time regarding AI, then I think that expanding the moral circle, or otherwise advocating for “better” values, may be among the best things we can do. What are your thoughts on that?
I still think that working on AI is ultra-important. In one sense, whether there’s a 1% risk or a 20% risk doesn’t really matter: society is still extremely far from the optimum level of concern. (Similarly, whether the right carbon tax is $50 or $200 doesn’t really matter; either way, it’s far higher than what we currently have.)
For longtermist EAs more narrowly, it might matter insofar as I think it makes some other options more competitive than they would otherwise be: especially the idea of long-term investment (whether financial or via movement-building); doing research on topics relevant to longtermism; and, like you say, perhaps pursuing broader x-risk reduction strategies, like preventing war, improving governance, trying to improve incentives so that they align better with the long term, and so on.
There’s a lot of debate about the causes of the industrial revolution. Very few commentators point to some technological breakthrough as the cause, so it’s striking that people are inclined to point to a technological breakthrough in AI as the cause of the next growth mode transition. Instead, leading theories point to some resource overhang (‘colonies and coal’), or some innovation or change in institutions (more liberal laws and norms in England, or higher wages incentivising automation) or in culture. So perhaps there’s some novel governance system that could drive a higher growth mode, and that’ll be the decisive thing.
Strongly agree. I think it’s helpful to think about this in terms of the degree to which social and economic structures optimise for growth and innovation. Our modern systems (capitalism, liberal democracy) do reward innovation (and maybe that’s what caused the growth mode change), but we’re far away from strongly optimising for it. We care about lots of other things, and whenever these come into conflict, we don’t sacrifice everything on the altar of productivity, growth, and innovation. And, while you can make money by innovating, the incentive favours innovations that are marketable in the near term rather than those that maximise long-term technological progress. (Compare, e.g., an app that lets you book taxis more conveniently vs. foundational neuroscience research.)
So a growth mode change could be triggered by any social change (in culture, governance, or something else) that results in significantly stronger optimisation pressure for long-term innovation.
That said, I don’t really see concrete ways in which this could happen, and current trends don’t seem to point in this direction. (I’m also not saying it would necessarily be a good thing.)
One thing that moves me towards placing a lot of importance on culture and institutions: We’ve actually had the technology and knowledge to produce greater-than-human intelligence for thousands of years, via selective breeding programs. But it’s never happened, because of taboos and incentives not working out.
People didn’t quite have the relevant knowledge, since they didn’t have sound plant and animal breeding programs or predictions of inheritance.