I haven’t figured out how to organize my thoughts well, so forgive me if this is unclear or disjointed.
"I used a narrow definition of AGI because I think that's where actionable analysis can be made"
Even assuming AI reaches your narrow definition of AGI and then stops advancing, AGI would still radically change the economic and political environment. The bits in this post about behavior of regulators become irrelevant if regulators are replaced by AI.
I don’t think it’s fair to say (e.g.) “it’s hard to predict how politics/government will change, so I’ll just assume they won’t.” Predicting that nothing will change is still a prediction! If you’re going to take the route of not making predictions where the future is especially fuzzy, then I think that’s more defensible, in which case the answers on “Regulators speed up”, “Political opposition reverses”, and “Consumers accept cultivated meat” should all be something like “it’s too difficult to predict because AGI will radically change the political and cultural environment”.
If you write an analysis that only makes sense in an implausible world—the one where AGI accelerates technology, but nothing else happens—then that still has some positive value as long as that world has a non-zero chance of obtaining. But the framing of the piece directs animal activists to focus their attention on that world, and I think that’s the wrong world to focus on because it’s so improbable. At minimum, the piece doesn’t make it clear that this is an assumption.
If you have recommendations for how to reason about worlds where current baselines genuinely don’t extrapolate at all, I’d really welcome them!
Here’s what I would say: Yes, some things, like how society will be structured post-AGI, are very hard to predict. But other things about AGI are predictable. I can predict that it speeds up almost all kinds of work. I can predict that AGI will control the shape of the future—either because it has explicit control, or because humans retain control but still rely on AGI to do most of the work (because AGI is better than humans at almost all tasks). I can predict that, on our current trajectory, ASI will follow shortly after AGI (see AI As Profoundly Abnormal Technology). I can predict that if ASI is misaligned, then it will wipe out all life on earth.
There is an argument that animal activists should advocate to pause AI development because (1) probably after AGI and certainly after ASI, animal activists will have no power; and (2) it is too difficult to predict whether AGI/ASI would be good for animals, so we need to buy more time to figure out how to make it good. This is an argument from conservatism rather than an argument from expected value; I still think AI pause advocacy has high expected value, but that argument is harder to make. (I’m currently writing something longer about why pausing AI is a good idea even setting aside the alignment problem, although my argument isn’t just about animal welfare.)
Another important question is how to shape the values of AGI (about half the ideas in this list are some version of that). There are things we can do now to influence the direction of AI development.
Another way to reason about post-AGI worlds is to think about AI alignment. For example, I have three recent posts about which forms of alignment are more likely to be good for animals (1, 2, 3). A big thing that I think animal activists under-appreciate is that the way AI is aligned—if it's aligned—could have a big effect. I don't just mean "will care-for-animals be built into its values", but that there are differences in how different types of alignment are supposed to work, e.g. constitutional AI vs. agent foundations (see 1).
Speaking of “if it’s aligned”, maybe the best thing to do is to work directly on AI alignment. I wrote a relevant post here. I believe a good number of AI safety researchers think that solving the alignment problem is the best way to help animals, although I can’t recall any piece of writing where someone clearly articulated this position.
ETA: More on this bit:
[W]e don’t address scenarios in which AGI drastically reshapes institutional and political dynamics. [...] [W]e focus on existing institutional structures because they allow actionable analysis, but we acknowledge this is a limitation.
Rather than predicting that AGI won’t change institutional structures, to allow actionable analysis I could instead say:
My assumption is that first AGI will become a world government singleton, and its values will be determined by its constitution.
This scenario is both easier to analyze (you can ignore political and regulatory factors and just focus on the text content of the AI constitution) and more likely to actually happen (although I still think it’s pretty unlikely).
Thanks for your reply!
I definitely take your point about "I used a narrow definition of AGI because I think that's where actionable analysis can be made, but I agree it's not necessarily enough." – I think I could have worded that better.
What I meant was that I think the world I discuss is plausible and we can get some actionable analysis from it, which can get us some way to identifying what actions may be more robust across different scenarios. (I agree we wouldn’t want to discuss scenarios that are impossible.)
It seems the difference in our views here is that I think it’s possible institutions and consumer preferences are quite sticky, at least for a little while; e.g. society imposes that humans have to be final decision-makers for longer than you’d expect (perhaps because legacy rules persist or people strongly prefer slower human-in-the-loop processes, or something else), or consumers really want ‘traditional’ food that they know, like, and trust regardless of their economic position. If you think there’s a 0% chance that can happen, then it makes sense not to agree with what I describe above.
I probably won’t carry on replying here, but I do appreciate you taking the time to explain your view; it made me think about the framing of my post and my viewpoint a lot more.