Our lightcone is an enormous endowment. We get to have a lot of computation, in a universe with simple physics. What these resources are spent on matters a lot.
If we get AI right (create a CEV-aligned ASI), we get most of the utility out of these resources automatically, almost tautologically: to the extent that, after considering all the arguments and reflecting, we would conclude we ought to value something, that is what CEV points to as an optimization target. If it takes us a long time to get AI right, we lose a literal galaxy of resources every year, but this is approximately nothing in relative terms.
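A rough back-of-the-envelope on why a year of delay is approximately nothing: the sketch below assumes an order-of-magnitude figure of a few billion reachable galaxies (an assumption, not something argued here) and takes the one-galaxy-per-year loss rate from the paragraph above.

```python
# Back-of-the-envelope: how much of the endowment does one year of delay cost?
# Assumed figures (order-of-magnitude only):
#   reachable_galaxies ~ 4e9   -- rough assumed estimate of galaxies in the affectable universe
#   galaxies_lost_per_year ~ 1 -- the "a literal galaxy of resources every year" figure above

reachable_galaxies = 4e9       # assumed order-of-magnitude estimate
galaxies_lost_per_year = 1     # figure used in the text

fraction_lost_per_year = galaxies_lost_per_year / reachable_galaxies
print(f"Fraction of the endowment lost per year of delay: {fraction_lost_per_year:.1e}")
# prints ~2.5e-10 -- a rounding error next to the gap between "AI goes well" and ~0%
```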
If we die because of AI, we get ~0% of the possible value/max CEV.
Increasing the chance that AI goes well is what's important. Work that marginally shifts the percentage of value we capture near the maximum matters far less than work that changes whether AI goes well at all. Whether we die because of AI is the largest input.
(I find a negative % of CEV very implausible: it almost never makes sense to spend resources penalizing another agent's utility if that agent is smart enough to make doing so not worth it, and for other, more speculative reasons.)