More work needs to be done on building consensus among consciousness researchers – not on finding the one right theory (plenty of people are working on that), but on identifying what the community thinks it collectively knows.
I’m a bit unsure what you mean by that. If consciousness researchers continue to disagree on fundamental issues—as you argue they will in the preceding section—then it’s hard to see that there will be a consensus in the standard sense of the word.
Similarly, you write:
They need to speak from a unified and consensus-driven position.
But in the preceding section you seem to suggest that won’t be possible.
Fwiw my guess is that even in the absence of a strong expert consensus, experts will have a substantial influence over both policy and public opinion.
I was imagining that the consensus would concern conditionals. I think it is feasible to establish what sets of assumptions people might naturally make, and what views those assumptions would support. That would allow a degree of objectivity without settling on the right theory. It might also involve assigning probabilities, or ranges of probabilities, to the views themselves, or to what it is rational for other researchers to think about different views.
So we might get something like the following (when researchers evaluate gpt6):
There are three major groups of assumptions, a, b, and c.
Experts agree that gpt6 has a 0% probability of being conscious if a is correct.
Experts agree that the rational probability to assign to gpt6 being conscious if b is correct falls between 2 and 20%.
Experts agree that the rational probability to assign to gpt6 being conscious if c is correct falls between 30 and 80%.
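A toy sketch of how such conditional consensus could feed into an overall credence: given someone's own weights on the assumption sets a, b, and c (the weights below are invented for illustration, not anything from the discussion), the expert-agreed conditional ranges pin down an overall range.

```python
# Hypothetical illustration only: the priors over assumption sets are
# made up; the conditional ranges are the ones from the example above.
priors = {"a": 0.4, "b": 0.35, "c": 0.25}  # assumed, reader-supplied weights

# Expert-agreed conditional probability ranges for the system being
# conscious, given each assumption set.
cond_ranges = {"a": (0.0, 0.0), "b": (0.02, 0.20), "c": (0.30, 0.80)}

# Weighted average of the endpoints gives the overall credence range.
low = sum(priors[k] * cond_ranges[k][0] for k in priors)
high = sum(priors[k] * cond_ranges[k][1] for k in priors)

print(f"Overall credence range: {low:.3f} to {high:.3f}")
# With these invented priors: 0.082 to 0.270
```

The point of the sketch is just that agreement on conditionals still constrains the answer: two readers with different priors over a, b, and c would get different ranges, but both ranges are anchored by the same expert-agreed conditionals.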
But afaict you seem to say that the public needs to have the perception that there’s a consensus. And I’m not sure that they would if experts only agreed on such conditionals.
You’re probably right. I’m not too optimistic that my suggestion would make a big difference. But it might make some.
If a company were to announce tomorrow that it had built a conscious AI and would soon have it available for sale, I expect it would prompt a bunch of experts to express their own opinions on Twitter, and journalists to contact a somewhat randomly chosen group of outspoken academics for their perspective. I don’t think there is any mechanism for people to get a sense of what experts really think, at least in the short run. That’s dangerous both because what people hear would be somewhat arbitrary, possibly reflecting the opinions of overzealous or overcautious academics, and because it might lack authority, being the opinions of only a handful of people.
In my ideal scenario, there would be some neutral body, perhaps one that conducted regular expert surveys, that journalists would think to talk to before publishing their pieces and that could give the sort of judgement I gestured at above. That judgement might show that most views on consciousness agree that the system is or isn’t conscious, or at least that there is significant room for doubt. People might still make up their minds quickly, but they might entertain doubts for longer, and such a body might give companies an incentive to try harder to build systems that are genuinely likely to be conscious.