Under preference utilitarianism, it doesn’t necessarily matter whether AIs are conscious.
I’m guessing preference utilitarians would typically say that only the preferences of conscious entities matter. I doubt any of them would care about satisfying an electron’s “preference” to stay bound to a proton rather than be ionized.
Perhaps. I don’t know what most preference utilitarians believe.
Are you familiar with Brian Tomasik? (He’s written about the suffering of fundamental particles, and has also defended preference utilitarianism.)