> I think increasing the value of good futures is probably higher importance, but much less tractable
I am curious about the lower tractability. Do you think that changing the moral values/goals of the ASIs humanity would create is not a tractable way to influence the value of the future? If so, is that because we are unable to change them, because we don't know which moral values to input, or something else? If it's because we don't know which values to input, what about inputting the goal of figuring out which goals to pursue (a "long reflection")?