I think sentience is purely computational; it doesn’t matter what the substrate is. Suppose you are asleep. I toss a coin: heads, I upload your mind into a highly realistic virtual copy of your room; tails, I leave you alone. Now I offer you some buttons that switch the paths of various trolleys in various real-world trolley problems, with the payoffs depending on the coin flip. Say that if you are real, pressing the red button gains 2 utils, and if you are virtual, pressing costs 3 utils. As you must (by the assumption that the simulation is accurate) make the same decisions in reality and virtuality, then to get maximum utility, you must act as if you are uncertain which one you are.
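To make the arithmetic explicit: since you must choose identically in both worlds, the best you can do is maximise the expectation over the coin flip. A minimal sketch, taking the payoff numbers from the thought experiment and assuming a fair coin (everything else here is illustrative):

```python
# Expected-utility sketch of the coin-flip trolley setup.
# Assumes a fair coin and the hypothetical payoffs from the text:
# pressing gains 2 utils if you are real, costs 3 utils if you are virtual.

P_REAL = 0.5            # tails: you were left alone in the real world
GAIN_IF_REAL = 2
LOSS_IF_VIRTUAL = -3

def expected_utility(press: bool) -> float:
    """Expected utility, given the same decision is made in both worlds."""
    if not press:
        return 0.0
    return P_REAL * GAIN_IF_REAL + (1 - P_REAL) * LOSS_IF_VIRTUAL

eu_press = expected_utility(True)     # 0.5 * 2 + 0.5 * (-3) = -0.5
eu_refrain = expected_utility(False)  # 0.0
best = "refrain" if eu_refrain > eu_press else "press"
```

So even though one of the two copies of you would profit from pressing, the decision that maximises expected utility is to refrain, exactly as if you were genuinely uncertain which copy you are.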
“I have no idea if I’m currently sentient or not” is a very odd thing to say.
Maybe it is chemical structure. Maybe a test tube full of just dopamine and nothing else is ever so happy as it sits forgotten on the back shelves of a chemistry lab. Isn’t it convenient that the sentient chemicals are full of carbon and get on well with human biochemistry? Like, what if all the sentient chemical structures contained americium? No one would have been sentient until the nuclear age, and people could make themselves a tiny bit sentient at the cost of radiation poisoning.
“But, humans could suffer digital viruses, which could be perhaps worse than the biological ones.” It’s possible for the hardware to get a virus, like some modern piece of malware, that just happens to be operating on a computer running a digital mind. It’s possible for nasty memes to spread. But in this context we are positing a superintelligent AI doing the security, so neither of those will happen.
Fixing digital minds is easier than fixing chemical minds, for roughly the same reason that fixing digital photos is easier than fixing chemical ones. With chemical photos, you often have a clear idea of what you want to do (just make this area lighter), yet doing it is difficult. With chemical minds, you sometimes have a clear idea of what you want to do (just reduce the level of this neurotransmitter), yet doing it is hard.
“But then, how would you differentiate a digital virus from an interaction, if both would change some aspects of the code or parameters?” If those words describe a meaningful difference, then there must be some way to tell. We are positing a superintelligence with total access to every bit flipped, so yes, it can tell. This question is like asking, “How can you tell pictures of cats from pictures of dogs when both are just grids of variously coloured pixels?”
“Ah, I think there is nothing beyond ‘healthy.’ Once one is unaffected by external and internal biological matters, they are healthy.”
Sure. But did you define healthy in a way that agrees with this? And wouldn’t mind uploading reduce the chance of getting cancer in the future? The AI has no reason not to apply whatever extreme tech it can to reduce the chance of you ever getting ill by another 0.0001%.
But is sentience only computational? As in, the ability to make decisions based on logic, but not decisions based on instinct, e.g. baby turtles going to the sea without having learned such before?
Yeah! Maybe high levels of pleasure hormones just make entities feel pleasant, whereas matters not known to be associated with pleasure don’t. Although we are not certain what causes affect, some biological bodily changes should be needed, according to neuroscientists.
It is interesting to think about what happens if you have both superintelligent risk actors and superintelligent security actors. It is possible that if security work advances relatively rapidly while risk activities receive less investment, you end up with a very superintelligent security AI facing an ‘only’ superintelligent risk AI; assuming otherwise equal opportunities for these two entities, the risk is mitigated.
Yes, changing digital minds should be easier, because the mind is easily accessible (it is code) and understood (developed with understanding, possibly with specialists responsible for parts of the code).
The meaningful difference relates to harm versus increased wellbeing or performance, for the entity and for others.
Ok, then healthy should be defined as normal physical and organ function (unless otherwise preferred by the patient) together with normal or high mental wellbeing. Then the AI would still have an incentive to reduce cancer risk, but not to, e.g., make an adjustment when inaction falls within a medically normal range.