I’d not like to ingest nanobots, which would be something like a worm infection but worse!
For a huge range of goals, the optimum answer involves some kind of nanobot (unless even deeper magic tech exists). If you want a person to be healthy, the nanobots can make them healthier than any good nutrition can.
The idea I was getting at is that asking an AI for better nutrition, meant the way you mean it, greatly limits the options for what you actually want. Suppose you walk a lot to get to places, and your shoes are falling apart. You ask the AI for new shoes, when it could have given you a fancy car. By limiting the AI to “choice of food” rather than “choice of every arrangement of atoms allowed by physics,” you are greatly reducing the amount the AI can optimize your health.
Oh yeah, that makes sense. And if humans can’t imagine what super-healthy is, then they need to defer to the AGI, but they should not misspecify what they meant…
I don’t think the reason the AI needs nanobots is that humans have difficulty imagining what super-healthy is. A person who is, say, bulletproof is easy to imagine, and probably not achievable with just good nutrition, but is achievable with nanobots. The same goes for biology that is virus-proof, cancer-proof, etc.
I can imagine mind uploading quite easily.
There may be some “super-healthy” so weird and extreme that I can’t imagine it. But there is already a bunch of weird extreme stuff I can imagine.
OK! You mean super-healthy as in resilient to biological illnesses, or perhaps to biological processes (such as aging).
Nanobots would probably work, but mind uploading could be easier since biological bodies would not need to be maintained.
While physical illness would not be possible in the digital world, mental health issues could occur. There should be a way to isolate only positive emotions. But I still think that actions could be performed and emotions exhibited while nothing would be felt by entities that do not have structures similar to those in the human brain that biologically/chemically process emotions. Do you think that a silicon-based machine that incorporates specific chemical structures could be sentient?
Ah, I think there is nothing beyond ‘healthy.’ Once one is unaffected by external and internal biological matters, they are healthy. Traditional physical competition, such as the high jump, would probably not make sense in the digital world. But humans could suffer digital viruses, which could perhaps be worse than the biological ones. But then, how would you differentiate a digital virus from an interaction, if both would change some aspects of the code or parameters?
I think sentience is purely computational; it doesn’t matter what the substrate is. Suppose you are asleep. I toss a coin: heads, I upload your mind into a highly realistic virtual copy of your room; tails, I leave you alone. Now I offer you some buttons that switch the paths of various trolleys in various real-world trolley problems (with a dependency on the coin flip). So if you are real, pressing the red button gains 2 util; if you are virtual, pressing costs 3 util. As you must (by the assumption that the simulation is accurate) make the same decisions in reality and virtuality, to get max util you must act as if you are uncertain.
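To make the arithmetic explicit, here is a minimal sketch of the expected-utility calculation, assuming a fair coin and the payoffs above; the function name and the code itself are just my own illustration, not part of the thought experiment:

```python
# Expected utility of pressing the red button when you cannot tell whether
# you are the real person or the uploaded copy (fair coin, payoffs as above:
# +2 util if real, -3 util if virtual).

def expected_utility_of_pressing(p_real: float = 0.5,
                                 gain_if_real: float = 2.0,
                                 loss_if_virtual: float = -3.0) -> float:
    """Expected utility for an agent forced to act identically in both cases."""
    return p_real * gain_if_real + (1.0 - p_real) * loss_if_virtual


if __name__ == "__main__":
    # 0.5 * 2 + 0.5 * (-3) = -0.5 < 0, so the uncertain agent should not press.
    print(expected_utility_of_pressing())
```

Since the expected value is negative, an agent that must make the same choice in both branches does better by not pressing, which is exactly the behaviour of an agent that is genuinely uncertain whether it is currently real or simulated.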
“I have no idea if I’m currently sentient or not” is a very odd thing to say.
Maybe it is chemical structure. Maybe a test tube full of just dopamine and nothing else is ever so happy as it sits forgotten on the back shelves of a chemistry lab. Isn’t it convenient that the sentient chemicals are full of carbon and get on well with human biochemistry? What if all the sentient chemical structures contained americium? No one would have been sentient until the nuclear age, and people could make themselves a tiny bit sentient at the cost of radiation poisoning.
“But humans could suffer digital viruses, which could perhaps be worse than the biological ones.” It’s possible for the hardware to get a virus, like some modern piece of malware that just happens to be running on a computer hosting a digital mind. It’s possible for nasty memes to spread. But in this context we are positing a superintelligent AI doing the security, so neither of those will happen.
Fixing digital minds is easier than fixing chemical minds, for roughly the reason fixing digital photos is easier than fixing chemical ones. With chemical photos, you often have a clear idea of what you want to do (just make this area lighter), yet doing it is difficult. With chemical minds, you sometimes have a clear idea of what you want to do (just reduce the level of this neurotransmitter), yet doing it is hard.
“But then, how would you differentiate a digital virus from an interaction, if both would change some aspects of the code or parameters?” If those words describe a meaningful difference, then there must be some way to tell. We are positing a superintelligence with total access to every bit flipped, so yes, it can tell. Compare: “How can you tell pictures of cats from pictures of dogs when they are both just grids of variously coloured pixels?”
“Ah, I think there is nothing beyond ‘healthy.’ Once one is unaffected by external and internal biological matters, they are healthy.”
Sure. But did you define healthy in a way that agrees with this? And wouldn’t mind uploading reduce the chance of getting cancer in the future? The AI has no reason not to apply whatever extreme tech it can to reduce the chance of you ever getting ill by another 0.0001%.
But is only computational sentience computational? As in, the ability to make decisions based on logic, but not the making of decisions based on instinct, e.g. baby turtles going to the sea without ever having learned to do so?
Yeah! Maybe high levels of pleasure hormones just make entities feel pleasant, whereas substances not known to be associated with pleasure don’t. Although we are not certain what causes affects, according to neuroscientists some biological bodily changes should be needed.
It is interesting to think about what happens if you have both superintelligent risky actors and superintelligent security actors. If security work advances relatively rapidly while risk activities receive less investment, you could end up with a very superintelligent security AI facing an ‘only’ superintelligent risky AI; assuming the two otherwise have equal opportunities, the risk is mitigated.
Yes, changing digital minds should be easier because the mind is easily accessible (it is code) and understood (it was developed with understanding, possibly with specialists responsible for different parts of the code).
The meaningful difference relates to harm versus increased wellbeing or performance, for the entity and for others.
OK, then healthy should be defined as normal physical and organ function, unless the patient prefers otherwise, with mental wellbeing normal or high. Then the AI would still have an incentive to reduce cancer risk, but not to, e.g., make an adjustment when inaction already falls within a medically normal range.