What does it mean for something to be an optimizer?
Expected utility maximization seems to fully cover this. More general models aren't particularly useful for saving the world.
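For concreteness, the formalization I have in mind is just the textbook one (notation mine, nothing from the thread): an expected-utility maximizer with action set $A$, beliefs $P$, and utility function $U$ over outcomes $o$ selects

$$a^* = \arg\max_{a \in A} \; \mathbb{E}_{o \sim P(\cdot \mid a)}\big[U(o)\big] = \arg\max_{a \in A} \sum_{o} P(o \mid a)\, U(o).$$

On this view, calling something an "optimizer" amounts to saying it is usefully compressed by some such $(A, P, U)$ triple.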
For what it’s worth, I have significant disagreements with basically all of your short replies to these basic questions, and I’ve been heavily engaged in AI alignment discussions for several years. So, I strongly disagree with your claim that these questions are “either already solved or there’s a good reason why thinking about them is not useful to the solution”, at least in the way you seem to think they have been solved.
I feel like they’re at least solved enough that they’re not what should be getting focused on. I predict that in worlds where we survive, time spent on those questions won’t have cashed out to much value.