But is sentience only computational? As in, the ability to make decisions based on logic rather than on instinct, e.g. baby turtles making their way to the sea without ever having learned to do so?
Yeah! Maybe high levels of pleasure hormones just make entities feel pleasant, whereas substances not known to be associated with pleasure don't. Although we are not certain what causes affect, neuroscientists suggest that some biological bodily changes should be needed.
It is interesting to consider what happens if you have both superintelligent risk-creating actors and superintelligent security actors. If security work advances relatively rapidly while risky activities receive less investment, you could end up with a very superintelligent (security) AI facing an 'only' superintelligent (risky) AI; assuming the two entities otherwise have equal opportunities, risk is mitigated.
Yes, changing digital minds should be easier because they are readily accessible (code) and understood (developed deliberately, possibly with specialists responsible for different parts of the code).
The meaningful difference is whether the change harms the entity and others or increases their wellbeing or performance.
OK, then 'healthy' should be defined as normal physical and organ function (unless the patient prefers otherwise) together with normal or high mental wellbeing. Then the AI would still have an incentive to reduce cancer risk but not, e.g., to make an adjustment when inaction would keep the patient within a medically normal range.
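To make that incentive structure concrete, here is a minimal sketch of the decision rule in Python; every name, field, and threshold below is a hypothetical illustration I am introducing, not anything specified in the discussion.

```python
# Hypothetical sketch of the proposed rule: intervene to reduce genuine
# risks (e.g. cancer), but take no action when inaction keeps the patient
# healthy, i.e. within a medically normal range with normal-or-high wellbeing.

from dataclasses import dataclass

@dataclass
class PatientState:
    organ_function_normal: bool        # physical/organ function within normal range
    mental_wellbeing: float            # assumed scale: 0.0 (poor) to 1.0 (high)
    cancer_risk: float                 # estimated probability of developing cancer
    patient_prefers_otherwise: bool = False  # patient overrides the physical default

CANCER_RISK_THRESHOLD = 0.05  # assumed cutoff; a real system would need to calibrate this
NORMAL_WELLBEING = 0.5        # assumed lower bound of "normal" wellbeing

def is_healthy(state: PatientState) -> bool:
    """'Healthy' per the proposed definition: normal physical and organ
    function (unless the patient prefers otherwise) plus normal-or-high
    mental wellbeing."""
    physical_ok = state.organ_function_normal or state.patient_prefers_otherwise
    return physical_ok and state.mental_wellbeing >= NORMAL_WELLBEING

def should_intervene(state: PatientState) -> bool:
    """The AI keeps its incentive to reduce cancer risk, but makes no
    adjustment when inaction already falls within the healthy range."""
    if state.cancer_risk > CANCER_RISK_THRESHOLD:
        return True                 # incentive to reduce cancer risk remains
    return not is_healthy(state)    # otherwise, leave a healthy state alone

# Healthy patient, low risk -> no intervention
print(should_intervene(PatientState(True, 0.8, 0.01)))  # False
# Elevated cancer risk -> intervene even if otherwise healthy
print(should_intervene(PatientState(True, 0.8, 0.20)))  # True
```

The point of the toy rule is just that the abstention condition is defined by the patient's state, not by the AI's capability: the AI acts on risks above a threshold and otherwise defers to the medically normal range.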