Paraphrasing from my other comment:
IMO the stance of “AI is too unpredictable, so I won’t consider it in my prioritization” is pretty reasonable. I was more trying to argue against stances like “AI is a huge deal specifically in that it will rapidly accelerate technological development, but nothing else about society will change.” For example, I commonly see animal activists say that AGI will solve the technical problem of cultivated meat, but there will still be regulatory hurdles. If AGI is too unpredictable, then you shouldn’t make predictions about which technological problems it will solve. That particular claim about cultivated meat is making a strong prediction that AI will be revolutionary, but also somehow won’t change the regulatory environment. The way I put it in OP—under “AGI = intelligence”—is that some animal activists treat AI as a technology-accelerator, when really it’s a general intelligence.