I’d be very surprised if AI were predominantly considered risk-free in long-timelines worlds. The more AI is integrated into the world, the more it will interact with and cause harmful events, processes, and behaviors; take, for example, the chatbot that apparently facilitated a suicide.
And I take Snoop Dogg’s reaction to recent AI progress as somewhat representative of a more general attitude that will grow stronger even with relatively slow and mostly benign progress:
Well I got a motherf*cking AI right now that they did made for me. This n***** could talk to me. I’m like, man this thing can hold a real conversation? Like real for real? Like it’s blowing my mind because I watched movies on this as a kid years ago. When I see this sh*t I’m like what is going on? And I heard the dude, the old dude that created AI saying, “This is not safe, ’cause the AIs got their own minds, and these motherf*ckers gonna start doing their own sh*t.” I’m like, are we in a f*cking movie right now, or what? The f*ck man?
That is, it will continually feel weird and novel, and worth pondering where AI progress is going and where the risks are, and more serious people will join in doing this, which will in turn increase the credibility of those concerns.
“Considered risk-free” is very different from what I discussed, which is that the broad public will see much more benefit and have little direct experience of the types of harms we’re concerned about. Weird and novel won’t change the public’s mind about the technology if they benefit from it, and the “more serious people” in the West who drive the narrative, namely politicians, pundits, and celebrities, still have the collective attention span of a fish. And in the meantime, RLHF will keep LLMs from going rogue, they will be beneficial, and it will seem fine to everyone not thinking deeply about the risk.