I think it’s interesting and admirable that you’re dedicated to a position that’s so unusual in this space.
However, I assume I’m in the majority here in that my intuitions are quite different from yours.
One quick point while we’re here:
> this view is likely rooted in a bias that automatically favors human beings over artificial entities—thereby sidelining the idea that future AIs might create equal or greater moral value than humans—and treating this alternative perspective with unwarranted skepticism.
I think a common, but perhaps not well-vocalized, utilitarian take is that humans don’t have much special significance in terms of creating well-being. The main alternative would be something much more abstract: some kind of generalization of hedonium, consequentialism-ium, or similar. For now, let’s define hedonium as “the ideal way of converting matter and energy into well-being, after a great deal of deliberation.”
As such, it’s very tempting to separate concerns: have AI tools focus on being great tools, and separately optimize hedonium to be efficient at producing well-being. While I’m not sure AIs would have zero qualia, I’d feel a lot more confident that they would have dramatically less qualia per unit of resources than a far more optimized substrate would.
If one follows this general logic, it seems likely that the vast majority of well-being in the future would exist as hedonium, not within the AIs created to ultimately produce that hedonium.
One less intense formulation would be to have both AIs and humans focus only on getting us to the point where we understand the situation with qualia and hedonium much better (à la the Long Reflection), and then re-evaluate.

In my strategic thinking around AI, I’m not particularly optimizing for the qualia of the humans involved in the AI labs or the relevant governments. Similarly, I’d expect not to optimize hard for the qualia of the early AIs in the period when we’re unsure about qualia and ethics, even if I thought they might have experiences. That said, I would be nervous if I thought this period could involve AIs experiencing intense suffering or being treated in highly immoral ways.