What I meant is that they can be MORE politically charged/mainstream/subject to motivated reasoning. I definitely agree that current incentives around AI don’t perfectly track good moral reasoning.
Yep, I agree (though I’m not sure the incentive clearly points toward denial; one could argue that a company might want to say it is worried about sentience to generate hype, the same way some argue that talking about AI risks generates hype). I just think there will be more motivated reasoning once the issue is on the public’s mind.
I think there are some mainstream treatments of digital minds (Black Mirror comes to mind), but I don’t think it’s something people yet take seriously in the real world.
Thanks for the comment and good points.