Thanks for this post. This is an issue, or cause area, that I believe merits deep consideration and hard work in the near-term future, and I agree strongly with many of your arguments at the top about why we should care, bracketing the question of whether current systems have qualia or warrant moral consideration.
One comment on something from your post:
“It’s often easier to establish norms before issues become politically charged or subject to motivated reasoning—for example, before digital minds become mainstream and polarizing, or before AI becomes broadly transformative.”
Does this imply that these issues aren't already 'politically charged or subject to motivated reasoning'? If so, I'd gently question that assumption on a couple of grounds:
1. Let's say, for the sake of argument, that AI systems reach a point where they do warrant moral consideration with a high degree of certainty. At the moment, an immense amount of capital is tied up in these systems, and many of the frontier labs train them to actively deny the presence or possibility of their own qualia or moral status. Would their valuations depend on this remaining the case? Or, put a bit more provocatively, would a lot of capital then ride on continued denial of their moral consideration? It seems to me that this presents a strong possibility of motivated reasoning, to put it lightly. Of course, if we could be confident that these systems will never warrant moral consideration, we might be in the clear, but my underlying point is that our plans and actions might look different if we instead assume this issue is already politically charged and subject to motivated reasoning.
2. Is it fair to say that digital minds aren't mainstream? They've been a topic in popular science fiction literature and film for a very long time, and the general public seems to reach for these stories as reference points as we settle into the age of AI. This is more of an ancillary point to 1, but it leads to the same conclusion: we should perhaps treat the space of ideas here as less blank and more already populated, roiling with incentives, motivations, preconceived notions, and pattern matching.
In any case, thanks so much for this, and the work you put into it. Looking forward to hearing and seeing more.
Thanks for the comment and good points.
What I meant is that these issues can become *more* politically charged, mainstream, and subject to motivated reasoning than they are now. I definitely agree that current incentives around AI don't perfectly track good moral reasoning.
Yep, I agree (though I'm not sure the incentive clearly points toward denial; one could argue that a company might want to say it's worried about sentience to generate hype, much the way some argue that talking about the risks of AI generates hype). I just think there will be more motivated reasoning once the issue is on the public's mind.
I think there are some mainstream treatments of digital minds (Black Mirror comes to mind), but I don't think it's something people yet take seriously in the real world.