Post-singularity worlds where people have the freedom to cause enormous animal suffering as a byproduct of legacy food production methods, despite having the option, fully subsidized by third parties, not to do so, seem like they probably overlap substantially with worlds where people have the freedom to spin up large quantities of digital entities capable of suffering and torture them forever. If you think such outcomes are likely, I claim that this is even more worthy of intervention. I personally don’t expect to have either option in most post-singularity worlds where we’re around, though I guess I would be slightly less surprised to have the option to torture animals than the option to torture ems (though I haven’t thought about it too hard yet).
But, as I said above, if you think it’s plausible that we’ll have the option to continue torturing animals post-singularity, this seems like a much more important outcome to try to avert than anything happening today.
Coming back to this, on what timeline do you expect this kind of growth in wealth to happen, making animal welfare extremely cheap?
Or, with what probability would it not happen for at least another 15 years (and us not all dying for at least that long)? I’d guess 15 years is long enough for many animal welfare interventions to have significant impact, though it’s on the shorter end: some interventions take several years to have any welfare impact at all.
I’m imagining we could just discount welfare impacts by such probabilities. Animal welfare could still look quite cost-effective even after that, but it’ll depend on the probabilities.
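To make the discounting idea above concrete, here is a minimal sketch of the arithmetic. All numbers are hypothetical placeholders I've chosen purely for illustration, not estimates from the discussion:

```python
# Sketch of discounting welfare impacts by the probability that the
# intervention has time to pay off. All inputs are made-up examples.

p_no_asi_15y = 0.3           # hypothetical P(no transformative AI for >= 15 years)
p_survive_15y = 0.9          # hypothetical P(we don't all die in that window)
raw_welfare_impact = 1000.0  # hypothetical undiscounted impact units per dollar

# Only count worlds where the intervention actually has 15 years to play out.
discounted_impact = raw_welfare_impact * p_no_asi_15y * p_survive_15y
print(discounted_impact)
```

Whether animal welfare still looks cost-effective after this discount just depends on how large those probabilities are, which is the point being made above.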
If I’m understanding your question correctly, that part of my expectation is almost entirely conditional on being in a post-ASI world. Before then, if interest in (effectively) reducing animal suffering stays roughly the size of “EA”, then I don’t particularly expect it to become cheap enough to subsidize people farming animals to raise them in humane conditions. (This expectation becomes weaker with longer AI timelines, but I haven’t thought that hard about what the world looks like in 20+ years without strong AI, and how that affects the marginal cost of various farmed animal welfare interventions.)
So my timelines on that are pretty much just my AI timelines, conditioned on “we don’t all die” (which are shifted a bit longer than my overall AI timelines, but not by that much).