I guess I felt that a lot of the post was arguing from within a utilitarian frame, which I think is generally fair. When I talk about “not leaving a footprint on the future,” what I’m referring to is epistemic humility about which moral theories are correct. I’m quite uncertain myself about what is correct when it comes to morality, though I put extra weight on utilitarianism. Given that uncertainty, we should be worried about being wrong and therefore try our best not to lock in whatever we currently believe. (The classic example: if we had done this 200 years ago, we might still have slavery in the future.)
I’m a believer that virtue ethics and deontology are imperfect-information approximations of utilitarianism. Kant’s categorical imperative, for example, is a way of looking at the long-term future and asking: how do we optimise society to be the best it can be?
I guess a core crux for me is that you seem to be arguing a bit for naive utilitarianism here. I don’t actually believe that an AGI will follow the VNM axioms, i.e. be fully rational. I think it will be an internal dynamic system weighing the different things it wants, and it won’t fully maximise utility because it won’t be internally aligned. Therefore we need to get its values right, or we’ll end up with weird and idiosyncratic values that are not optimal for the long-term future of the world.
I hope that makes sense; I liked your post in general.