I was imagining that the consensus would concern conditionals. I think it is feasible to establish what sets of assumptions people might naturally make, and what views those assumptions would support. This would allow a degree of objectivity without settling on the right theory. It might also involve assigning probabilities, or ranges of probabilities, to the views themselves, or to what it is rational for researchers to believe about the different views.
So we might get something like the following (when researchers evaluate GPT-6):
There are three major groups of assumptions, a, b, and c.
Experts agree that GPT-6 has a 0% probability of being conscious if a is correct.
Experts agree that the rational probability to assign to GPT-6 being conscious if b is correct falls between 2% and 20%.
Experts agree that the rational probability to assign to GPT-6 being conscious if c is correct falls between 30% and 80%.
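To make this concrete, here is a minimal sketch of how a reader could combine such expert conditionals with their own credences in a, b, and c to get an overall range. The credence values (0.5, 0.3, 0.2) are hypothetical placeholders, not anything the experts would supply:

```python
# Sketch: combining expert-agreed conditional ranges for
# "GPT-6 is conscious" with one's own credences in the
# assumption groups a, b, and c.

# (low, high) bounds on P(conscious | group), per the expert consensus above.
conditionals = {
    "a": (0.00, 0.00),   # 0% if a is correct
    "b": (0.02, 0.20),   # 2-20% if b is correct
    "c": (0.30, 0.80),   # 30-80% if c is correct
}

# A reader's own credences that each group of assumptions is correct
# (hypothetical values; they should sum to 1 if a, b, c are exhaustive).
credences = {"a": 0.5, "b": 0.3, "c": 0.2}

# Total probability: P(conscious) = sum over groups of
# P(group) * P(conscious | group), computed separately for the
# low and high ends of each expert range.
low = sum(credences[g] * conditionals[g][0] for g in conditionals)
high = sum(credences[g] * conditionals[g][1] for g in conditionals)

print(f"Overall rational credence range: {low:.1%} to {high:.1%}")
# -> Overall rational credence range: 6.6% to 22.0%
```

The point of the sketch is just that the conditional consensus does real work: even readers who disagree sharply about a, b, and c would be drawing on the same expert-vetted ranges.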
But afaict you seem to say that the public needs to have the perception that there's a consensus. And I'm not sure that they would have that perception if experts only agreed on such conditionals.
You’re probably right. I’m not too optimistic that my suggestion would make a big difference. But it might make some.
If a company were to announce tomorrow that it had built a conscious AI and would soon have it available for sale, I expect that it would prompt a bunch of experts to express their own opinions on Twitter, and journalists to contact a somewhat arbitrarily chosen group of outspoken academics for their perspective. I don't think there is any mechanism for people to get a sense of what experts really think, at least in the short run. That's dangerous for two reasons: what the public hears would be somewhat arbitrary, possibly reflecting the opinions of overzealous or overcautious academics, and it might lack authority, being the opinions of only a handful of people.
In my ideal scenario, there would be some neutral body, perhaps one that ran regular expert surveys, that journalists would think to consult before publishing their pieces and that could give the sort of judgement I gestured at above. That judgement might show that most views on consciousness agree that the system is or isn't conscious, or at least that there is significant room for doubt. People might still make up their own minds, but they might entertain doubts for longer, and such a body might give companies an incentive to try harder to build systems that are more likely to be conscious.