“…there is general agreement that current and foreseeable AI systems do not have what it takes to be responsible for their actions (moral agents), or to be systems that humans should have responsibility towards (moral patients).”
Seems false, unless he’s using “general agreement” and “foreseeable” in some very narrow sense?
There are a variety of views on the potential moral status of AI/robots/machines into the future.
From a quick search, there seem to be arguments that AI systems would count as moral agents if their functionality were equivalent to that of humans, or when/if they become capable of moral reasoning and decision-making. Others argue that consciousness is essential for moral agency and that the current AI paradigm is insufficient to generate consciousness.
I was also interested in following this up. As the source of this claim he cites another article he has written, ‘Is it time for robot rights? Moral status in artificial entities’ (https://link.springer.com/content/pdf/10.1007/s10676-021-09596-w.pdf).