Good question! I think that the best path forward requires taking a “both-and” approach. Ideally we can (a) slow down AI development to buy AI ethics, safety, and sentience researchers time and (b) speed up these forms of research (focusing on moral, political, and technical issues) to make good use of this time. So, yes, I do think that we should avoid creating potentially sentient AI systems in the short term, though as my paper with Rob Long discusses, that might be easier said than done. As for whether we should create potentially sentient AI systems in the long run (and how individuals, companies, and governments should treat them to the extent that we do), that seems like a much harder question, and it will take serious research to address it. I hope that we can do some of that research in the coming years!