The FTX Foundation might get fewer submissions that change its mind than it would have if it had considered strategic updates prizeworthy.
The unconditional probability of takeover isn’t necessarily the question of most strategic interest. There’s a huge difference between “50% AI disempowers humans somehow, on the basis of a naive principle of indifference” and “50% MIRI-style assumptions about AI are correct”*. One might conclude from the second that the first is also true, but the first has no strategic implications (the principle of indifference ignores such things!), while the second has lots of strategic implications. For example, it suggests that “totally lock down AI development, at least until we know more” is what we need to aim for. I’m not sure exactly where you stand on whether that is needed, but given that your stated position seems to rely substantially on outside-view reasoning, it might be a big update.
The point is: middling probabilities of strategically critical hypotheses might actually be more important updates than extreme probabilities of strategically opaque hypotheses.
My suggestion (not necessarily a full solution) is that you consider big strategic updates potentially prizeworthy. For example: do we gain a lot by delaying AGI by a few years? If we consider all the plausible paths to AGI, do we gain a lot by hastening the top 1% most aligned of those paths by a few years?
I think it’s probably too hard to pre-specify exactly which strategic updates would be prizeworthy.
*By which I mean something like “more AI capability eventually yields doom, no matter what, unless it’s highly aligned”