“The superintelligence is misaligned with our own objectives but is benign”. You could have an AI with some meta-cognition, able to figure out what’s good and maximize it, in the same way EAs try to figure out what’s good and maximize it with parts of their lives. This view mostly makes sense if you give some credence to moral realism.
“My personal view on your subject is that you don’t have to work in AI to shape its future.” Yes, that’s what I wrote in the post.
“You can also do that by bringing the discussion into the public and create awareness for the dangers.” I don’t think it’s a good method, and I think you should target a much more specific public, but yes, I know what you mean.
“You could have an AI with some meta-cognition, able to figure out what’s good and maximizing it in the same way EAs try to figure out what’s good and maximize it with parts of their life.”
I’m not sure how that would work, but we don’t need to discuss it further; I’m no expert.
“I don’t think it’s a good method and I think you should target a much more specific public but yes, I know what you mean.”
What exactly do you think is “not good” about a public discussion of AI risks?