Thanks for this. I agree that AIs might simply pretend to have certain preferences without actually having them, which would avoid certain risky scenarios. But I also find it plausible that consumers would want AIs with genuinely human-like preferences (not just pretense), and that this demand would make it more likely that such AIs would be created. Overall, I remain very uncertain.
I agree. It may also be that training an AI to merely imitate certain preferences is far more expensive than simply having it hold those preferences by default, making the latter far more commercially viable.