I believe that’s an oversimplification of what Alexander thinks but don’t want to put words in his mouth.
In any case this is one of the few decisions the 4 of us (including Cari) have always made together, so we have done a lot of aligning already. My current view, which is mostly shared, is that we're currently underfunding x-risk even without longtermism math, both because FTXF went away and because I've updated towards shorter AI timelines in the past ~5 years. And even aside from that, we weren't at full theoretical budget last year anyway. So that all nets out to an expected increase, not a decrease.
I’d love to discover new large x-risk funders though and think recent history makes that more likely.
OK, thanks for sharing!
And yes I may well be oversimplifying Alexander’s view.