I gave two examples of the kinds of things that could convince me that it really understands shutdown: writing malicious code and spawning copies of itself in response to prompts to resist shutdown (without hinting in any way that those are options, though perhaps asking it to do something other than just try to persuade you).
I think "autonomously prove theorems" and "write entire functional apps as complex as Photoshop, could verbally explain all the consequences of being shut down and how that would impact its work" are all very consistent with just character associations.
I'd guess "fully automate the job of a lawyer" means doing more than just character associations and actually having some deeper understanding of the referents, e.g. if it's been trained to send e-mails, consult the internet, open and read documents, write documents, post things online, etc., from a general environment with access to those functions, without this looking too much like hardcoding. Then it seems to associate English language with the actual actions. This still wouldn't mean it really understood what it meant to be shut down, in particular, though. It has some understanding of the things it's doing.
A separate question here is why we should care about whether AIs possess "real" understanding, if they are functionally very useful and generally competent. If we can create extremely useful AIs that automate labor on a giant scale, but are existentially safe by virtue of their lack of real understanding of the world, then shouldn't we just do that?
We should, but if that means they'll automate less, or less efficiently, than otherwise, then the short-term financial incentives could outweigh the risks to companies or governments (from their perspectives), and they could push through with risky AIs anyway.