I may not have understood all of what you said, but I was left with a few thoughts after finishing this.
1. Creating Bob to have values: if Bob is created able to understand that he was created to have values, and able then to reject those values and choose his own, then I'd say he is probably freer than if he weren't. But, having chosen his own values, he now has to live in society, a society possibly largely determined by an AI. If society is out of tune with him, he will have limited ability to live out his values, and the cognitive dissonance of not being able to live them out will wear away at his ability to hold his freely-chosen values. But society has to be some particular way, and it might not be compatible with whatever Bob comes up with (unless, maybe, each person lives in a simulation that serves as their society, one that can be engineered to agree with them).
Other than the engineered-solipsism option, it seems unavoidable to limit freedom to some extent. (Or maybe even then: what if people can figure out that they are in engineered solipsism and rebel?) But to minimize this, we could design a government (a world-ruling AI) that refrains from deciding for other people as much as possible and actively fosters people's ability to make their own decisions. At the least, one concern about AI alignment is that AI will consume decision-making opportunities in an unprecedented way, which might lead one to try to prevent that from happening, or even to reduce the level of decision-making hoarding that already exists.
2. Brainwashing: If I make art, that's a bit of brainwashing (in a sense). But then, someone else can make art too, and people can simply ignore my art, or theirs. It's more a case of a "fair fight" than if someone locks me in a room and plays propaganda tapes 24/7, or simply disables the "I can see that I have been programmed and can rebel against that programming" part of my brain. This "fair fight" scenario could perhaps be made better than it is (for instance, an AI could actively empower each person to make or ignore art, so as to counteract some brainwashing artist). Our current world has a lot of brainwashing in it, in the sense that some people are more psychologically powerful than others.
3. "Hinge of History"-ness: we could actively try to defer decision-making as much as possible to future generations, giving each generation the ability to make its own decisions and revoke the past as much as possible (with one limitation: if one generation revokes the past, it can't impede the next generation from revoking its values in turn), and design/align AI that does the same. In other words, we could actively try to reduce the "hingeyness" of our century.