“FTX Foundation will not get submissions that change its mind, but it would have gotten them if only they had [broadened the scope of the prizes beyond just influencing their probabilities]”
Examples of things someone considering entering the competition would presumably consider out of scope:
- Making a case that AI misalignment is the wrong level of focus – even if AI risk is high, it could be that AI risk and other risks are very heavily concentrated in specific risk factor scenarios, such as a global hot or cold war. This view is apparently expressed by Will (see here).
- Making a case based on tractability – that a focus on AI risk is misguided because the ability to affect such risks is low (not too far from the views of Yudkowsky here).
- Making the case that we should not put much decision weight on predictions of future risks – e.g. because long-run predictions of future technology are inevitably unreliable (see here), or because modern risk assessment best practice says that probability estimates should play only a limited role in risk assessments (my view, expressed here), or similar.
- Making the case that some other x-risk is more pressing, more likely, more tractable, etc.
- Making the case against the FTX Future Fund's underlying philosophical and empirical assumptions – this could include claims about the epistemics of focusing on AI risk (for example, how we should respond to cluelessness about the future) or decision-relevant views about the long-run future (for example, that the future might be bad and not worth protecting, that there might be further risks after AI, or that longtermism is false).
It seems like any strong case falling into these categories should be decision-relevant to the FTX Future Fund, but all of them are (unless I misunderstand the post) currently out of scope.
Obviously there is a trade-off. Broadening the scope makes the project harder and less clear, but it increases the chance of finding something decision-relevant. I don't have a strong reason to say the scope should be broadened now; I think that depends on the FTX Future Fund's current capacity, plans for other competitions, and so on.
I guess I worry that the strongest arguments are out of scope, and that if this competition doesn't significantly update FTX's views then future competitions will not be run and you will not fund the arguments you are seeking. So I am flagging this as a potential path to failure for your pre-mortem.
Sorry – I realise, scrolling down, that I am making much the same point as MichaelDickens' comment below. Hopefully I have added some depth or something useful.