^ I am not super familiar with the history of “solve X problem and win Y reward” prizes, but the examples I can recall all involved solutions that were testable and relatively easy to specify objectively.
With the alignment problem, it seems plausible that some proposals could be found that likely “work” in theory, but getting people to agree on the right metrics seems difficult, and if that goes poorly we might all die.