Throughout this post, we use “AGI” to refer to AI systems with broad generality and high capability—roughly levels 2 to 4 on Google DeepMind’s Levels of AGI framework (p. 5).
This post seems to rely on the premise that there will be a large time gap between AGI and ASI, or DeepMind’s capability levels 4 and 5 (“at least 99th percentile of skilled adults” vs. “outperforms 100% of humans”). Unless society deliberately decides to stop AI development, it seems unlikely that there would be a large gap between AGI and ASI. ASI would render most/all of the identified bottlenecks irrelevant, e.g. “regulators” and “political opposition” become meaningless in the face of superintelligence.
Even if AI is “merely” as smart as the 99th percentile human, once AI has the ability to do 99th-percentile work for very cheap with arbitrarily many copies in parallel, it seems likely that the political and governmental system as we know it would cease to exist. At minimum, we’d see close to a 100% unemployment rate. It seems very hard to make claims like “political opposition would slow down cultivated meat” when you’re talking about a world with 100% unemployment.
This report is not alone in taking this perspective. A big problem I see with a lot of these kinds of analyses (especially in the animal welfare space) is that they are trying to analyze a world where AI is better at everything than the majority of humans, and yet the political/social/economic environment is basically unchanged. I don’t see how that would happen.
Thanks for your comment. I agree it’s possible that ASI could come shortly after AGI, and I do caveat in the piece that if you believe this, most of the takeaways won’t hold.
What I wanted to do with this post wasn’t necessarily to persuade people of any one scenario, but instead to describe the actual bottlenecks that cultivated meat faces so that people can calibrate their own views, whatever those views are, against the real landscape. For example, if someone came away from reading this more optimistic about cultivated meat under AGI, but also better able to articulate why (according to how they think AGI solves the bottlenecks), I think that’s still a valuable outcome.
I used a narrow definition of AGI because I think that’s where actionable analysis can be made [edit to add: and I think it’s not a completely implausible scenario – see my reply below], but I agree it’s not necessarily enough. If you have recommendations for how to reason about worlds where current baselines genuinely don’t extrapolate at all, I’d really welcome them! It’s a problem I find really hard, and I think a lot of others, especially those coming from cause areas outside of AI safety, do too.
I haven’t figured out how to organize my thoughts well, so forgive me if this is unclear or disjointed.
I used a narrow definition of AGI because I think that’s where actionable analysis can be made
Even assuming AI reaches your narrow definition of AGI and then stops advancing, AGI would still radically change the economic and political environment. The bits in this post about the behavior of regulators become irrelevant if regulators are replaced by AI.
I don’t think it’s fair to say (e.g.) “it’s hard to predict how politics/government will change, so I’ll just assume they won’t.” Predicting that nothing will change is still a prediction! If you’re going to take the route of not making predictions where the future is especially fuzzy, then I think that’s more defensible, in which case the answers on “Regulators speed up”, “Political opposition reverses”, and “Consumers accept cultivated meat” should all be something like “it’s too difficult to predict because AGI will radically change the political and cultural environment”.
If you write an analysis that only makes sense in an implausible world—the one where AGI accelerates technology, but nothing else happens—then that still has some positive value as long as that world has a non-zero chance of obtaining. But the framing of the piece directs animal activists to focus their attention on that world, and I think that’s the wrong world to focus on because it’s so improbable. At minimum, the piece doesn’t make it clear that this is an assumption.
If you have recommendations for how to reason about worlds where current baselines genuinely don’t extrapolate at all, I’d really welcome them!
Here’s what I would say: Yes, some things, like how society will be structured post-AGI, are very hard to predict. But other things about AGI are predictable. I can predict that it speeds up almost all kinds of work. I can predict that AGI will control the shape of the future—either because it has explicit control, or because humans retain control but still rely on AGI to do most of the work (because AGI is better than humans at almost all tasks). I can predict that, on our current trajectory, ASI will follow shortly after AGI (see AI As Profoundly Abnormal Technology). I can predict that if ASI is misaligned, then it will wipe out all life on earth.
There is an argument that animal activists should advocate to pause AI development because (1) probably after AGI and certainly after ASI, animal activists will have no power; and (2) it is too difficult to predict whether AGI/ASI would be good for animals, so we need to buy more time to figure out how to make it good. This is an argument from conservatism rather than an argument from expected value; I still think AI pause advocacy has high expected value, but that argument is harder to make. (I’m currently writing something longer about why pausing AI is a good idea even setting aside the alignment problem, although my argument isn’t just about animal welfare.)
Another important question is how to shape the values of AGI (about half the ideas in this list are some version of that). There are things we can do now to influence the direction of AI development.
Another way to reason about post-AGI worlds is to think about AI alignment. e.g. I have three recent posts about what forms of alignment are more likely to be good for animals (1, 2, 3). A big thing that I think is under-appreciated by animal activists is that the way AI is aligned—if it’s aligned—could have a big effect. I don’t just mean “will care-for-animals be built into its values”, but that there are differences between how different types of alignment are supposed to work, e.g. constitutional AI vs. agent foundations (see 1).
Speaking of “if it’s aligned”, maybe the best thing to do is to work directly on AI alignment. I wrote a relevant post here. I believe a good number of AI safety researchers think that solving the alignment problem is the best way to help animals, although I can’t recall any piece of writing where someone clearly articulated this position.
ETA: More on this bit:
[W]e don’t address scenarios in which AGI drastically reshapes institutional and political dynamics. [...] [W]e focus on existing institutional structures because they allow actionable analysis, but we acknowledge this is a limitation.
Rather than predicting that AGI won’t change institutional structures, to allow actionable analysis I could instead say:
My assumption is that the first AGI will become a world government singleton, and its values will be determined by its constitution.
This scenario is both easier to analyze (you can ignore political and regulatory factors and just focus on the text content of the AI constitution) and more likely to actually happen (although I still think it’s pretty unlikely).
Thanks for your reply! I definitely take your point about “I used a narrow definition of AGI because I think that’s where actionable analysis can be made, but I agree it’s not necessarily enough.” – I think I could have worded that better.
What I meant was that I think the world I discuss is plausible and we can get some actionable analysis from it, which can get us some way toward identifying which actions may be more robust across different scenarios. (I agree we wouldn’t want to discuss scenarios that are impossible.)
It seems the difference in our views here is that I think it’s possible institutions and consumer preferences are quite sticky, at least for a little while; e.g. society insists that humans remain the final decision-makers for longer than you’d expect (perhaps because legacy rules persist or people strongly prefer slower human-in-the-loop processes, or something else), or consumers really want ‘traditional’ food that they know, like, and trust regardless of their economic position. If you think there’s a 0% chance of that happening, then it makes sense not to agree with what I describe above.
I probably won’t carry on replying here, but I do appreciate you taking the time to explain your view, it made me think about the framing of my post and my viewpoint a lot more.