You don’t have to convince the general public; you have to convince the major stakeholders to adopt tests that check for AI consciousness. It honestly seems similar to what we have done for the safety of AI models, but for their consciousness instead?
I think this is a great point, and it might change my mind. However, if these consciousness evals become burdensome for AI companies, I imagine we would need a public push in support of them for them to be enforced, especially through legislation. Then we get back to my dichotomy: if people think AI is obviously conscious (whether or not it is), we might get legislation, and if they don’t, I can only imagine some companies doing evals half-heartedly or voluntarily until it becomes too costly (as is, arguably, the current state of safety evals).
Yeah, I guess the crux here is to what extent we actually need public support, or at least what type of public support we need, for this to become legislation.
If we can convince 80–90% of the experts, then I believe this will have cascading effects on the population, and it isn’t as if AI being conscious is impossible to believe either. I’m sure millions of students have had discussions about AI sentience for fun, so it isn’t fully outside the Overton window.
I’m curious whether you disagree with the above, or whether there is another reason you think research won’t cascade to public opinion. Any examples you could point to?
I don’t have an example in mind exactly, but I’d expect you could find one in animal welfare. Where there are agricultural interests pushing against a decision, you need a public campaign to counter them. We don’t live in technocracies; representatives need to be shown that there is a commensurate interest in favour of the animals. On less important issues, or legislation that can be symbolic but isn’t expected to be used, experts can have more of a role. I’d expect that the former category is the more important one for digital minds. Does that make sense? I’m aware it’s a bit too stark a dichotomy to be true.
There’s this idea of the truth as an asymmetric weapon; I guess my point isn’t necessarily that the approach vector will be something like:
Expert discussion → Policy change
but rather something like:
Expert discussion → Public opinion change → Policy change
You could say something about memetics, and that it is the most understandable memes that get passed on rather than the truest ones, which is fair to some extent. But I guess I’m a believer that the world can be updated based on expert opinion.
For example, I’ve noticed a trend in the AI safety debate: the quality seems to get better and more nuanced over time (at least, in my opinion). I’m not sure what this entails for the general public’s understanding of the topic, but it feels like it affects policymakers.
I think this is a good description of the kind of scepticism I’m attracted to, perhaps to an irrational degree. Thanks for describing it!
I like your point about AI Safety. It seems at least a bit true.
I’ll update my vote on the banner to be a bit less sceptical. I think my scepticism about our ability to know whether AI is conscious is a major part of my disagreement with the debate statement, and I don’t endorse the level of scepticism I hold. Thanks!