You could have an AI with some meta-cognition, able to figure out what's good and maximize it, in the same way EAs try to figure out what's good and maximize it with parts of their lives.
I'm not sure how that would work, but we don't need to discuss it further; I'm no expert.
I don't think it's a good method, and I think you should target a much more specific audience, but yes, I know what you mean.
What exactly do you think is “not good” about a public discussion of AI risks?