Thanks for your comment. I agree it's possible that ASI could come shortly after AGI, and I do caveat in the piece that if you believe this, most of the takeaways won't hold.
What I wanted to do with this post wasn't necessarily to persuade people of any one scenario, but to describe the actual bottlenecks that cultivated meat faces so that people can calibrate their own views, whatever those views are, against the real landscape. For example, if someone came away from reading this more optimistic about cultivated meat under AGI, but also better able to articulate why (according to how they think AGI solves the bottlenecks), I think that's still a valuable outcome.
I used a narrow definition of AGI because I think that's where actionable analysis can be made [edit to add: and I think it's not a completely implausible scenario, especially for consumer preferences; see my reply below], but I agree it's not necessarily enough. If you have recommendations for how to reason about worlds where current baselines genuinely don't extrapolate at all, I'd really welcome them! It's a problem I find really hard, and I think a lot of others, especially those coming from cause areas outside of AI safety, do too.
I haven't figured out how to organize my thoughts well, so forgive me if this is unclear or disjointed.
> I used a narrow definition of AGI because I think that's where actionable analysis can be made
Even assuming AI reaches your narrow definition of AGI and then stops advancing, AGI would still radically change the economic and political environment. The parts of this post about the behavior of regulators become irrelevant if regulators are replaced by AI.
I don't think it's fair to say (e.g.) "it's hard to predict how politics/government will change, so I'll just assume they won't." Predicting that nothing will change is still a prediction! If you're going to take the route of not making predictions where the future is especially fuzzy, then I think that's more defensible, in which case the answers on "Regulators speed up", "Political opposition reverses", and "Consumers accept cultivated meat" should all be something like "it's too difficult to predict because AGI will radically change the political and cultural environment".
If you write an analysis that only makes sense in an implausible world (the one where AGI accelerates technology, but nothing else happens), then that still has some positive value as long as that world has a non-zero chance of obtaining. But the framing of the piece directs animal activists to focus their attention on that world, and I think that's the wrong world to focus on because it's so improbable. At minimum, the piece doesn't make it clear that this is an assumption.
> If you have recommendations for how to reason about worlds where current baselines genuinely don't extrapolate at all, I'd really welcome them!
Here's what I would say: Yes, some things, like how society will be structured post-AGI, are very hard to predict. But other things about AGI are predictable. I can predict that it speeds up almost all kinds of work. I can predict that AGI will control the shape of the future, either because it has explicit control, or because humans retain control but still rely on AGI to do most of the work (because AGI is better than humans at almost all tasks). I can predict that, on our current trajectory, ASI will follow shortly after AGI (see AI As Profoundly Abnormal Technology). I can predict that if ASI is misaligned, then it will wipe out all life on earth.
There is an argument that animal activists should advocate to pause AI development because (1) probably after AGI, and certainly after ASI, animal activists will have no power; and (2) it is too difficult to predict whether AGI/ASI would be good for animals, so we need to buy more time to figure out how to make it good. This is an argument from conservatism rather than an argument from expected value; I still think AI pause advocacy has high expected value, but that argument is harder to make. (I'm currently writing something longer about why pausing AI is a good idea even setting aside the alignment problem, although my argument isn't just about animal welfare.)
Another important question is how to shape the values of AGI (about half the ideas in this list are some version of that). There are things we can do now to influence the direction of AI development.
Another way to reason about post-AGI worlds is to think about AI alignment. For example, I have three recent posts about what forms of alignment are more likely to be good for animals (1, 2, 3). A big thing that I think is under-appreciated by animal activists is that the way AI is aligned, if it's aligned, could have a big effect. I don't just mean "will care-for-animals be built into its values", but that there are differences between how different types of alignment are supposed to work, e.g. constitutional AI vs. agent foundations (see 1).
Speaking of "if it's aligned", maybe the best thing to do is to work directly on AI alignment. I wrote a relevant post here. I believe a good number of AI safety researchers think that solving the alignment problem is the best way to help animals, although I can't recall any piece of writing where someone clearly articulated this position.
ETA: More on this bit:
> [W]e don't address scenarios in which AGI drastically reshapes institutional and political dynamics. [...] [W]e focus on existing institutional structures because they allow actionable analysis, but we acknowledge this is a limitation.
Rather than predicting that AGI won't change institutional structures, to allow actionable analysis I could instead say:
> My assumption is that the first AGI will become a world-government singleton, and its values will be determined by its constitution.
This scenario is both easier to analyze (you can ignore political and regulatory factors and just focus on the text content of the AI constitution) and more likely to actually happen (although I still think it's pretty unlikely).
I definitely take your point about "I used a narrow definition of AGI because I think that's where actionable analysis can be made, but I agree it's not necessarily enough." I think I could have worded that better.
What I meant was that I think the world I discuss is plausible and we can get some actionable analysis from it, which can get us some way toward identifying which actions may be more robust across different scenarios. (I agree we wouldn't want to discuss scenarios that are impossible.)
It seems the difference in our views here is that I think it's possible institutions and consumer preferences are quite sticky, at least for a little while; e.g. society imposes that humans have to be final decision-makers for longer than you'd expect (perhaps because legacy rules persist, or people strongly prefer slower human-in-the-loop processes, or something else), or consumers really want "traditional" food that they know, like, and trust regardless of their economic position. If you think there's a 0% chance that can happen, then it makes sense not to agree with what I describe above.
I probably won't carry on replying here, but I do appreciate you taking the time to explain your view; it made me think about the framing of my post and my viewpoint a lot more.