Thanks Lizka and Ben! I found this post really thought-provoking. I’m curious to better understand the intuition behind discounting the post-AGI paradigm shift impacts to ~0.
My sense is that there’s still a pretty wide continuum of possible future outcomes, under some of which we should predictably expect current policies to endure. To simplify, consider six broad buckets of possible outcomes by the year 2050, applied to your example of whether the McDonald’s cage-free policy remains relevant.
1. No physical humans left. We’re all mind uploads or something more dystopian. Clearly the mind uploads won’t be needing cage-free McMuffins.
2. Humans remain but food is upended. We all eat cultivated meat, or rats (Terminator), or Taco Bell (Demolition Man). I also agree McDonald’s policy is irrelevant … though Taco Bell is cage-free too ;)
3. Radical change; people eat similar foods but no longer want McDonald’s. Maybe they have vast wealth and McDonald’s fails to keep up with their new luxurious tastes, or maybe they can’t afford even a cheap McMuffin. Either way, the cage-free policy is irrelevant.
4. Radical change, but McDonald’s survives. No matter how weird the future is, people still want cheap, tasty, convenient food and have long-established brand attachments to McDonald’s. Even if McDonald’s fires all its staff, it’s not clear to me why it would drop its cage-free policy.
5. AGI is more like the Internet. The cage-free McMuffins endure, just with some cool LLM-generated images on them.
6. No AGI.
I agree that scenarios 1-3 are possible, but they don’t seem obviously more likely to me than 4-6. At the very least, scenarios 4-6 don’t feel so unlikely that we should discount them to ~0. What am I missing?
4-6 seem like compelling reasons to discount the intersection of AI and animals work (which is what this post is addressing), because AI won’t be changing what’s important for animals very much in those scenarios. I don’t think the post makes any comment on the value of current, conventional animal welfare work in absolute terms.
I actually want to make both claims! I agree that it’s true that, if the future looks basically like the present, then probably you don’t need to care much about the paradigm shift (i.e. AI). But I also think the future will not look like the present so you should heavily discount interventions targeted at pre-paradigm-shift worlds unless they pay off soon.
Good question and thanks for the concrete scenarios! I think my tl;dr here is something like “even when you imagine ‘normalish’ futures, they are probably weirder than you are imagining.”
Even if McDonald’s fires all its staff, it’s not clear to me why it would drop its cage-free policy
I don’t think we want to make the claim that McDonald’s will definitely drop its cage-free policy, but rather the weaker claim that you should not assume that the value of a cage-free commitment will remain ~constant by default.
If I’m assuming that we are in a world where all of the human labor at McDonald’s has been automated away, I think that is a pretty weird world. As you note, even the existence of something like McDonald’s (much less a specific corporate entity which feels bound by the agreements of current-day McDonald’s) is speculative.
But even if we grant its existence: a ~40% egg price increase is currently enough cover for companies to feel justified in abandoning their cage-free pledges. Surely “the entire global order has been upended and the new corporate management is robots” is an even better excuse?
And even if we somehow hold McDonald’s to their pledge, I find it hard to believe that a world where McDonald’s can be run without humans does not quickly lead to a world where something more profitable than battery cage farming can be found. And, as a result, the cage-free pledge is irrelevant because McDonald’s isn’t going to use cages anyway. (Of course, this new farming method may be even more cruel than battery cages, illustrating one of the downsides of trying to lock in a specific policy change before we understand what the future will be like.)
To be clear: this is just me randomly spouting, I don’t believe strongly in any of the above. I think it’s possible someone could come up with a strong argument why present-day corporate pledges will continue post-paradigm-shift. But my point is that you shouldn’t assume that this argument exists by default.
AGI is more like the Internet. The cage-free McMuffins endure, just with some cool LLM-generated images on them.
Yeah, I think this world is (by assumption) one where cage-free pledges should not receive a massive discount.
No AGI
Note that some worlds where we wouldn’t get AGI soon (e.g. large-scale nuclear war setting science back 200 years) are also probably not great for the expected value of cage-free pledges.
(It is good to hear though that even in the maximally dystopian world of universal Taco Bell there will be some upside for animals 🙂.)
Under 4, we should consider the possibility of massive wealth gains from automation, and the possibility that the cage-free shift would have happened anyway, or at much lower (relative) cost, without our work before the AI transition and paradigm shift. People still want to eat eggs out of habit, food neophobia, or for cultural, political, or other reasons, but consumers become so wealthy that the cost difference between caged and cage-free is no longer significant to them: they would just pay it, or are much more open to cage-free (and other high-welfare) legislation.
Or, maybe some animal advocates (who invested in AI or even the market broadly) become so wealthy that they could subsidize people or farms to switch to cage-free. If this is “our money”, then this looks more like investing to give and just optimal donation timing. If this is not “our money”, say, because we’re not coordinating that closely with these advocates and have little influence on their choices, then it looks like someone else solving the problem later.
I think this is a good way of thinking about it and I like your classification. I also agree that neartermist animal welfare interventions shouldn’t be discounted to ~0. But I disagree with the claim that scenarios 1–3 are not obviously more likely than 4–6: #6 seems somewhat plausible, but I think #4 and #5 are highly unlikely.
The weakest possible version of AGI is something like “take all the technological and economic advances being made by the smartest people in the world, and now rapidly accelerate those advancements because you can run many copies of equally-smart AIs.” (That’s the minimum outcome; I think the more likely outcome is “AGI is radically smarter than the smartest human”.)
RE #5, I can’t imagine a world where an innovation like “greatly increase the number of world-class researchers” would be merely on par with the Internet in terms of impact.
RE #4, if technological change is happening that quickly, it seems implausible that McDonald’s will survive. People didn’t have anything comparable to McDonald’s 1000 years ago; they couldn’t have even imagined McDonald’s. I predict that a decade after TAI, if we’re still alive, then whatever stuff we have will look nothing like McDonald’s, in the same way that McDonald’s looks nothing like the stuff people had in medieval times.
I don’t think outcome #4 is crazy unlikely, but I do think it’s clearly less likely than #1–3.
RE #4, if technological change is happening that quickly, it seems implausible that McDonald’s will survive. People didn’t have anything comparable to McDonald’s 1000 years ago; they couldn’t have even imagined McDonald’s. I predict that a decade after TAI, if we’re still alive, then whatever stuff we have will look nothing like McDonald’s, in the same way that McDonald’s looks nothing like the stuff people had in medieval times.
If we’re still alive, most of the same people will still be alive, and their tastes, habits, and values will only have changed so much. Think of conservatives, people opposed to alt proteins, and others who grew up with McDonald’s. 1000 years is enough time for dramatic shifts in culture and values, but 10 years doesn’t seem to be. I suspect shifts in culture and values are primarily driven by newer generations growing up with different values and older generations with older values dying off, not by people changing their minds.
And radical life extension might make some values and practices persist far longer than they would have otherwise, although I’m not sure how much people who’d still want to eat conventional meat would opt for radical life extension.
I would predict maybe 65% chance of outcome #1 (50% chance everyone dies, 15% chance we get mind uploads or something); 15% chance of #2; 15% chance of #6; low chance of #3 or #4; basically zero chance of #5.
I’d add that you also have the possibility that scenarios 1–3 happen, but much later than many people currently think. My personal take is that the probability that ‘either AGI’s impact comes in more than ten years or it’s not that radical’ is >50%, certainly far more than 0%.