I agree that the future will be profoundly weird, although it’s an extra step to claim that the future will be profoundly weird in a way that changes what actions animal welfare folks should take (as opposed to being weird in some orthogonal manner).
Yeah, the future described in this post isn’t particularly “weird”, per se; it just assumes that every technology hypothetically proposed for the future will be created by ASI soon after AGI arrives.
I think the future will be a lot more unpredictable than this. Analogously, I can imagine someone from 1965 being very confused about a future where immensely powerful computers can fit in your pocket, but human spaceflight has gone no further than the moon. It’s very hard to predict in advance the constraints and shortcomings of future technology, or the practical and logistical factors that affect what is achieved.
IMO the stance of “AI is too unpredictable, so I won’t consider it in my prioritization” is pretty reasonable. I was more trying to argue against stances like “AI is a huge deal specifically in that it will rapidly accelerate technological development, but nothing else about society will change.” For example, I commonly see animal activists say that AGI will solve the technical problem of cultivated meat, but there will still be regulatory hurdles. If AGI is too unpredictable, then you shouldn’t make predictions about which technological problems it will solve. That particular claim about cultivated meat is making a strong prediction that AI will be revolutionary, but also somehow won’t change the regulatory environment. The way I put it in OP—under “AGI = intelligence”—is that some animal activists treat AI as a technology-accelerator, when really it’s a general intelligence.