Why do you think it’s any more important than the FTX Fund’s other obligations? If there’s to be a settlement matching partial assets to all of the fund’s liabilities, it should be done in an open and fair way. Maybe the assets are 0, in which case that becomes moot. My own view is that there are many other projects of equal or greater merit with funding commitments from the FTX Fund.
That’s reasonable. I guess from my perspective, I think the top EA grantmakers need persuading that p(doom|AGI) is significantly greater than 35%. If Open Phil already thinks this, then that’s great, but if they don’t (and their probabilities are similar to the Future Fund’s), then the Worldview Prize is very important. Even if your probabilities are the same, or much lower, it still has very high value of information imo.
In the survey I did last year, four Open Phil staff respectively gave probability 0.5, 0.5, 0.35, and 0.06 to “the overall value of the future will be drastically less than it could have been, as a result of AI systems not doing/optimizing what the people deploying them wanted/intended”.
That’s just four people, and they aren’t necessarily representative of the rest of longtermist Open Phil, but it at least shows that “higher than 35%” isn’t an unrepresented view there.
Interesting, thanks. What about short timelines? (p(AGI by 2043) in Future Fund Worldview Prize terms)
Ajeya Cotra’s median guess is that AGI is 18 years away; the last time I talked to a MIRI person, their median guess was 14 years. So the Cotra and MIRI camps seem super close to me in timelines (though you can find plenty of individuals whose median year is not in the 2036-2040 range).
If you look at (e.g.) animal welfare EAs vs. AI risk EAs, I expect a much larger gap in timeline beliefs.
One could also argue for prioritizing funding for work that has already been done over work that has been approved but not yet done. If someone was going to receive a grant to do certain work and has had it pulled, that is unfair and a loss to them . . . but it’s not as bad (or as damaging to the community / future incentives) as denying people payment for work they have already done.
How this logic translates to a prize program is murky. But unless you believe that the prize’s existence did not cause people to work more (i.e., that the prize program was completely ineffective), its cancellation means that people will not be paid for work they have already performed.
Of course, it might be possible to honor the commitment made for that work in some fashion that doesn’t involve awarding full prizes.