The value of saving philanthropic resources to deploy post-superintelligence is greater than it otherwise would be.
One way to think of this is that if there is a 10% existential risk from the superintelligence transition and we will attempt that transition, then the world is currently worth 0.90 V, where V is the expected value of the world after achieving that transition. So the future world is more valuable (in the appropriate long-term sense), and saving it is correspondingly more important. With these numbers the effect isn’t huge, but it would be important enough to be worth taking into account.
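Spelled out, writing $p = 0.10$ for the transition risk and $V$ for the post-transition expected value (notation mine, just shorthand for the numbers above):

\[ \text{current expected value} = (1 - p)\,V = 0.90\,V, \]

so getting safely through the transition is worth the remaining $0.10\,V$.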
More generally, worlds where we are almost through the time of perils are substantially more valuable than those where we aren’t. And setback prevention becomes more important the further through you are.
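One rough way to formalize this (my sketch, assuming the remaining perils can be treated as independent risks $p_1, \dots, p_n$ standing between us and a world worth $V$):

\[ \text{current expected value} = V \prod_{i=1}^{n} (1 - p_i). \]

Each peril passed pushes the product up toward $V$, so the expected value a setback would destroy grows the further through you are.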
For clarity, you’re using ‘important’ here in something like an importance × tractability × neglectedness factoring? So yes, more important (but there might be reasons to think it’s less tractable or neglected)?
Yeah, I mean ‘more valuable to prevent’, before taking into account the cost and difficulty.