Executive summary: Non-sentient but intelligent agents (“blindminds”) may deserve moral consideration, as agency and values are more fundamental than consciousness, and excluding them could lead to harmful consequences.
Key points:
1. Thought experiments explore interactions between sentient and non-sentient intelligent beings, revealing potential issues with excluding non-sentient agents from moral consideration.
2. There are practical and decision-theoretic reasons to treat non-sentient agents as morally relevant, including avoiding conflict and enabling cooperation.
3. Agency and values are argued to be more fundamental than consciousness; a being can have goals and make choices without subjective experience.
4. Excluding non-sentient AI from moral consideration could justify harmful exploitation and impede granting rights to potentially sentient AI.
5. Valuing agency rather than sentience alone allows for a more inclusive ethical framework that avoids the potential pitfalls of strict sentiocentrism.
6. The post advocates for considering both sentient and non-sentient agents in moral deliberations, forming a “Coalition of Agents” rather than a coalition defined solely by consciousness.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.