Harris and Raskin discussed the risk that AI partners will be used for “product placement” or political manipulation here, but I’m sceptical of this. These AI partners will surely run on a subscription business model rather than a freemium one, and, given how important user trust will be for these businesses, I don’t think they will try to manipulate their users in this way.
More broadly, values will surely change; there is no doubt about that. The very values of “human connection” and “human relationships” are eroded by definition if people are in AI relationships. A priori, I don’t think value drift is a bad thing. But in this particular case, the value change will inevitably be accompanied by a reduction of the population, which is a bad thing (according to my ethics, and, I believe, the ethics of most other people).