I basically agree with this with some caveats. (Despite writing a post discussing AI welfare interventions.)
I discuss related topics here and what fraction of resources should go to AI welfare. (A section in the same post I link above.)
The main caveats to my agreement are:
- From a deontology-style perspective, I think there is a pretty good case for trying to do something reasonable on AI welfare. Minimally, we should try to make sure that AIs consent to their current overall situation insofar as they are capable of consenting. I don't put a huge amount of weight on deontology, but enough to care a bit.
- As you discuss in the sibling comment, I think various interventions like paying AIs (and making sure AIs are happy with their situation) to reduce takeover risk are potentially compelling, and they are very similar to AI welfare interventions. I also think there is a weak decision theory case that blends with the deontology case from the prior bullet.
- I think there is a non-trivial chance that AI welfare will be a big and important field by the point when AIs are powerful, regardless of whether I push for such a field to exist. In general, I would prefer that important fields related to AI have better, more thoughtful views. (Not with any specific theory of change, just as a general heuristic.)