One thing I could imagine happening in these situations is that people close themselves off to object-level arguments to some degree, and maybe for (somewhat) good reason:
- To the general public, the idea of AI posing a serious (existential) risk is probably still very weird.
- People may have the impression that believing in such things correlates with being gullible.
- People may be hesitant about “being convinced” of something they haven’t fully thought through themselves.
I remember once, when I was younger, talking to a Christian fanatic of sorts who kept coming up with new arguments for why the Bible must obviously be true, citing the many correct predictions it has apparently made, plus some argument about irreducible complexity. In the moment, I couldn’t really tell if, where, or why his arguments failed. I found them somewhat hard to follow and just knew the conclusion was something both weird and highly unlikely (for reasons other than his concrete arguments). So my impression at the time was “there surely is something wrong with his claims, but in this very moment I lack the means to identify the weaknesses”.
I sometimes find myself in similar situations when somebody tries to get me to sign something or buy some product they’re offering. They tend to make very convincing arguments for why I should definitely do it, and I often have no good arguments against them. Still, I tend to resist in many of these situations, because I haven’t yet heard, or had a chance to find, the best counterarguments.
When somebody who has thought a lot about AI safety and is very convinced of its importance talks to people to whom this whole area is new and strange, I can imagine similar defenses being present. If this is true, more/better/different arguments may not necessarily be helpful to begin with. Some things that could help:
- Social proof (“these well-respected people and organizations think this is important”).
- Slightly weaker claims that people have an easier time agreeing with.
- Maybe some meta-level argument about why the unintuitiveness is misguided (although this could probably also be taken as an attack).