There is also the feedback loop involving the Future Fund itself. As Michael Dickens points out here:
“the existence of the FTX Future Fund decreases p(misalignment x-risk|AGI), and this very prize also decreases it, and this particular question has a negative feedback loop (where a high-probability answer to p(misalignment x-risk|AGI) decreases the probability, and vice versa).”
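To make that feedback loop concrete, here is a toy sketch (the response function, its numbers, and the `realized_risk` name are all invented for illustration, not anything the Future Fund has specified): if the realized risk is a decreasing function of the estimate the community publicly settles on, then a self-consistent answer is a fixed point of that function, and it sits below the naive “no response” estimate.

```python
# Toy illustration of the negative feedback loop described above.
# Everything here is invented for illustration: the response function,
# its numbers, and the assumption that the community's announced estimate
# feeds back into how much mitigation effort actually happens.

def realized_risk(announced_p: float) -> float:
    """Hypothetical decreasing response: more announced risk -> more
    mitigation effort (funding, prizes, governance) -> less realized risk."""
    baseline = 0.6    # made-up risk if nobody reacted at all
    mitigation = 0.5  # made-up maximum reduction from a full response
    return baseline - mitigation * announced_p

# A self-consistent answer is a fixed point: announce p and the realized
# risk comes out as p again. Iteration converges here because the response
# has slope magnitude < 1 (a contraction).
p = 0.6
for _ in range(50):
    p = realized_risk(p)

print(f"naive estimate (ignoring the feedback): {realized_risk(0.0):.2f}")  # 0.60
print(f"self-consistent (fixed-point) estimate: {p:.2f}")                   # 0.40
```

The point is only that the strength of the response determines how far the self-consistent answer falls below the naive one; with no response at all (mitigation = 0), the two coincide.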
I think it’s much easier to argue that p(misalignment x-risk|AGI) >35% (or 75%) as things stand.
What does “as things stand” mean? If we invented AGI tomorrow? That doesn’t seem like a useful prediction.

I’m thinking more along the lines of how things are with the current level of progress on AI Alignment and AI Governance, or assuming that the needle doesn’t move appreciably on these. In the limit of zero needle movement, this would be equivalent to AGI being invented tomorrow.
No, they are unconditional.