You touched on this a lot in your report, but I don’t think you went all the way, unless I missed something—what do you think a computer program would need to have in order to exhibit “minimally viable consciousness,” in your view?
I’m not sure. My best guess would be something in the direction of the GDAK-inspired cognitive architecture discussed in section 6.2.4, but I don’t yet have a clear picture of all the cognitive features that would need to be working together in a certain way for me to be even moderately confident the program is “conscious” in the sense defined in the report. I’m pretty sure none of the programs I described in the report are conscious (in the sense defined in the report).