I spoke with someone at MIRI who I expect knows more about this, and they pointed out that there aren’t good, operationalizable math questions in AI safety to attack. The money might point such people toward the possibility of interesting questions, but on its own it probably wouldn’t convince them to change focus.
As an intuition pump, imagine von Neumann were alive today; would it be worthwhile to pay him to look into alignment? (He explicitly did contract work at extraordinary rates, IIRC.) I suspect that it would be, despite the uncertainties. If you agree, then it seems worthwhile to try to figure out who comes closest to being a modern von Neumann and to pay them to look into alignment.
That seems mostly right, and I don’t disagree that making the offer is reasonable if there are people who will take it. But it’s still different from paying someone who only does math to do AI alignment, since von Neumann explicitly bridged fields, spanning several sciences and many areas of math.
I don’t think mathematics should be a crux. As I say below, the offer could be generalised to anyone a panel of top people in AGI Safety would want on their dream team (and who would otherwise be unlikely to work on the problem). Or perhaps “Fields Medalists, Nobel Prize winners in Physics, recipients of equivalent prizes in Computer Science, or Philosophy[?], or Economics[?]”. We could also include additional criteria, such as being able to intuit what is being alluded to here. Basically, the idea is to headhunt the very best people for the job, using extreme financial incentives. We don’t need to artificially narrow the search to one domain, but maths ability is a good heuristic to start from.