Not necessarily as severe as hitting a wall for the next few decades, but in brief:
I don’t think agriculture is an industry that will be transformed that rapidly (e.g. on the order of months or years) by AI, mainly because:
There are lots of traditions and cultural practices around food, so I don’t imagine people will want to dramatically change how they eat over the next decade or two
Agriculture involves human feedback loops (e.g. consumers trying products and giving feedback), which makes it hard to accelerate exponentially with AI progress (unlike predicting protein folding via AI or writing code)
Any advancements in AI are likely to benefit the animal agriculture industry at least as much as, if not more than, animal advocates
Given the animal ag industry is orders of magnitude richer than we are, I assume it will have more capacity to integrate AI into its work, e.g. intensifying factory farming (see this blog for more info). This further locks in the trends mentioned above
We will use AI for alt protein innovation, but again, this has long feedback loops due to needing human taste tests and product development, so I just don’t think we’ll get dramatically better products that quickly
I think you misunderstood my framing; I should have been more clear.
We can bracket the case where we all die to misaligned AI, since that leads to all animals dying as well.
If we achieve transformative AI and then don’t all die (because we solved alignment), then I don’t think the world will continue to have an “agricultural industry” in any meaningful sense (or, really, any other traditional industry; strong nanotech seems like it ought to let you solve for nearly everything else). Even if the economics and sociology work out such that some people will want to continue farming real animals instead of enjoying the much cheaper cultured meat of vastly superior quality, there will be approximately nobody interested in ensuring that those animals suffer, and the cost of ensuring that they don’t will be trivial.
Of course, this assumes we solve alignment and also end up pointed in the right direction. For a variety of reasons it seems pretty unlikely to me that we manage to robustly solve alignment of superintelligent AIs while pointed in “wrong”[1] directions; that sort of philosophical unsophistication is why I’m pessimistic on our odds of success. But other people disagree, and if you think it’s at all plausible that we achieve TAI in a way that locks in reflectively-unendorsed values which lead to huge quantities of animal suffering, that seems like it ought to dominate effectively all other considerations in terms of interventions w.r.t. future animal welfare.
Like those that lead to enormous quantities of trivially preventable animal suffering for basically dumb contingent reasons, i.e. “the people doing the pointing weren’t really thinking about it at the time, and most people don’t actually care about animal suffering at all in most possible reflective equilibria”.
People could still pay for animal products from animals that are more traditionally/conventionally raised, without new technologies, for political, religious or other identity reasons. This could include most (social) conservatives. And AI could make the world’s poor (social) conservatives much wealthier and able to afford far more animal products.
The cost to avert the suffering would be trivial, but so too would be the cost of conventional animal products, if (and because) everyone is rich. People could also refuse payment or other incentives to avoid conventional animal products.
And animals could continue to be bred for productivity at the cost of welfare, as has been done for broilers (although this trend for broilers is reversing in the West, because of animal advocacy from our extended community). Genetic engineering could also happen, but people averse to cultured and plant-based products might be averse to that, too. Some may be selectively averse, though.
I’d guess slaughter would be more humane, with better stunning, and maybe anaesthesia/painkillers would actually be used widely for painful mutilation procedures, and/or we’d just mutilate less. I’d guess people mostly wouldn’t oppose those, although many Muslims do oppose stunning for slaughter for religious reasons.
And high-welfare farming can be done in ways acceptable to conservatives, e.g. fairly natural conditions, natural breeds, mostly outdoors, or with lots of space and enrichment. However, they may oppose a transition towards that: they tend to vote against farmed animal welfare reforms, probably not just for cost reasons but also for political reasons.
Post-singularity worlds where people have the freedom to cause enormous animal suffering as a byproduct of legacy food production methods, despite having the option not to do so, fully subsidized by third parties, seem like they probably overlap substantially with worlds where people have the freedom to spin up large quantities of digital entities capable of suffering and torture them forever. If you think such outcomes are likely, I claim that this is even more worthy of intervention. I personally don’t expect to have either option in most post-singularity worlds where we’re around, though I guess I would be slightly less surprised to have the option to torture animals than the option to torture ems (I haven’t thought about it too hard yet).
But, as I said above, if you think it’s plausible that we’ll have the option to continue torturing animals post-singularity, this seems like a much more important outcome to try to avert than anything happening today.
For a variety of reasons it seems pretty unlikely to me that we manage to robustly solve alignment of superintelligent AIs while pointed in “wrong” directions; that sort of philosophical unsophistication is why I’m pessimistic on our odds of success.
This is an aside, but I’d be very interested to hear you expand on your reasons, if you have time. (I’m currently on a journey of trying to better understand how alignment relates to philosophical competence; see thread here.)
(Possibly worth clarifying up front: by “alignment,” do you mean “intent alignment,” as defined by Christiano, or do you mean something broader?)