My guess now is that where we most disagree is the value of a world where AIs disempower humanity and go on to have a vast, technologically super-advanced, rapidly expanding civilisation. I think this would quite likely be ~0 value, since we don’t really understand consciousness at all: my guess is that AIs aren’t yet conscious, and that if we get to TAI relatively quickly in the current paradigm, they probably still won’t be moral patients. As a sentientist, I don’t really care whether there is a huge future if humans aren’t in it (or something sufficiently related to humans, e.g. digital successors we create after carefully studying consciousness for a millennium and are very confident have morally important experiences).
So yes, I agree frontier AI models are where the most transformative potential lies, but I would prefer to get there far later, once we understand alignment and consciousness far better (while other, less important tech progress continues in the meantime).
Thanks. I disagree with this for the following reasons:
AIs will get more complex over time, even in our current paradigm. Eventually, on our current path of development, I expect AIs will have highly sophisticated cognition that I’d feel comfortable calling conscious (I’m an illusionist about phenomenal consciousness, so I don’t think there’s a fact of the matter anyway).
If we slowed down AI, I don’t think that would necessarily translate into a higher likelihood that future AIs will be conscious. Why would it?
In the absence of a strong argument that slowing down AI makes future AIs more likely to be conscious, I still think the considerations I mentioned outweigh the counter-considerations you’ve raised here, and that they should push us towards avoiding entrenching norms that could hamper future growth and innovation.