Thanks for writing this, Seth! I agree it's possible that we will not see transformative effects from AI for a long time, if ever, and I think it's reasonable for people to make plans which only pay off on the assumption that this is true. More specifically: projects which pay off under an assumption of short timelines often have other downsides, such as being more speculative, which means that the expected value of long-timeline plans can end up being higher even after you discount them for only working on long timelines.[1]
That being said, I think your post is underestimating how transformative truly transformative AI would be. As I said in a reply to Lewis Bollard, who made a somewhat similar point:
If I'm assuming that we are in a world where all of the human labor at McDonald's has been automated away, I think that is a pretty weird world. As you note, even the existence of something like McDonald's (much less a specific corporate entity which feels bound by the agreements of current-day McDonald's) is speculative.
But even if we grant its existence: a ~40% egg price increase is currently enough cover for companies to feel justified in abandoning their cage-free pledges. Surely "the entire global order has been upended and the new corporate management is robots" is an even better excuse?
And even if we somehow hold McDonald's to their pledge, I find it hard to believe that a world where McDonald's can be run without humans does not quickly lead to a world where something more profitable than battery-cage farming can be found. And, as a result, the cage-free pledge is irrelevant because McDonald's isn't going to use cages anyway. (Of course, this new farming method may be even more cruel than battery cages, illustrating one of the downsides of trying to lock in a specific policy change before we understand what the future will be like.)
[1] Although I would encourage people to actually try to estimate this and to pressure-test the assumption that there isn't a way their work can pay off on a shorter timeline.
Hi Ben, I agree that there are a lot of intermediate weird outcomes that I don't consider, in large part because I see them as less likely than (I think) you do. I basically think society is going to keep chugging along as it is, in the same way that life with the internet is certainly different from life without it, but we basically all still get up, go to work, seek love and community, etc.
However, I don't think I'm underestimating how transformative AI would be in the section on why my work continues to make sense to me if we assume AI is going to kill us all or usher in utopia, which I think could fairly be described as transformative scenarios ;)
If McDonald's becomes human-labor-free, I am not sure what effect that would have on advocating for cage-free campaigns. I could see it going many ways, or no ways. I still think persuading people that animals matter, and that they should give cruelty-free options a chance, is going to matter under basically every scenario I can think of, including that one.