I never said it did; I said it means I can’t argue that you should intrinsically value anything. What arguments could I give to a paper clipper to stop its paper-clipping ways?
Still, this is not right. There are plenty of arguments you can give to a paper clipper, such as Kant’s argument for the categorical imperative, or Sidgwick’s argument for utilitarianism, or many others.
1) They allow you to break ties. If there are two situations with equal well-being, then having a second intrinsic value would give you a way of picking between the two.
I don’t see why we should want to break ties, since it presupposes that our preferred metric judges the different options to be equal. Moreover, your pluralist metric will end up with ties too.
2) It might be computationally or informationally complicated to calculate your intrinsic value.
Sure, but that’s not an argument for having pluralism over intrinsic values.
I am thinking about unknown internal threats. One possibility that I alluded to is that it modifies itself to improve itself, but does so on a shaky premise and destroys itself. Another possibility is that parts of it may degrade and/or get damaged and it develops the equivalent of cancers.
If a single ASI is unstable and liable to collapse, then basically every view would count that as a problem, because it implies destruction of civilization and so on. It doesn’t have anything to do with autonomy in particular.
I was assuming a single non-morally-perfect AI, as that seems to me the most likely outcome of the drive toward a single AI.
An AI being non-morally perfect doesn’t imply that it would be racist, oppressive, or generally as bad as or worse than existing or alternative institutions.
If they are not free of surveillance, then they have not left the society.
Why should we care about someone’s desire to have a supercomputer which doesn’t get checked for the presence of dangerous AGI...?
I think we want different things from our moral systems. I think my morality/values are complicated and best represented by different heuristics that guide how I think or what I aim for. It would take more time than I am willing to invest at the moment to try to explain my views fully.
Why should we care about someone’s desire to have a supercomputer which doesn’t get checked for the presence of dangerous AGI...?
Why should we care about someone’s desire to have their thoughts not checked for the presence of malicious genius? They may use their thinking to create something equally dangerous that we have not yet thought of.
Why care about freedom at all?
If I upload and then want to take a spaceship somewhere hard to monitor, will I be allowed to take a supercomputer if I need it to perform science?
What is in my pocket was once considered a dangerous supercomputer. The majority of the world is now trusted with it, or at least the benefits of having such devices outweigh the potential costs.
Why should we care about someone’s desire to have their thoughts not checked for the presence of malicious genius? They may use their thinking to create something equally dangerous that we have not yet thought of.
If you can do that, sure. Some people might have a problem with it though, because you’re probing their personal thoughts.
Why care about freedom at all?
Because people like being free and it keeps society fresh with new ideas.
If I upload and then want to take a spaceship somewhere hard to monitor, will I be allowed to take a supercomputer if I need it to perform science?
Sure. Just don’t use it to build a super-AGI that will take over the world.
What is in my pocket was once considered a dangerous supercomputer. The majority of the world is now trusted with it, or at least the benefits of having such devices outweigh the potential costs.
That’s because you can’t use what is in your pocket to take over the world. Remember that you started this conversation by asking “How would that be allowed if those people might create a competitor AI?” So if you assume that future people can’t create a competitor AI, for instance because their computers have no more relative power to help take over the world than our current computers do, then of course those people can be allowed to do whatever they want, and your original question doesn’t make sense.
Because people like being free and it keeps society fresh with new ideas.
If I upload and then want to take a spaceship somewhere hard to monitor, will I be allowed to take a supercomputer if I need it to perform science?
Sure. Just don’t use it to build a super-AGI that will take over the world.
What if there is a very small risk that I will do so, let’s say 0.0000001%? Using something like the arguments for our cosmic inheritance, this could be seen as likely causing a certain amount of astronomical waste. Judged purely on whether people are alive, this seems like a no-go. But if you take into consideration that a society that stops this kind of activity would be less free, and less free for all people throughout history, that is a negative. I am trying to get this negative included in our moral calculus; otherwise I fear we will optimize it away.
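To make the tension concrete, here is a rough back-of-the-envelope sketch in Python. The 0.0000001% figure is the one quoted above; the 1e50 “future lives” stake and the variable names are illustrative stand-ins, not anyone’s actual estimate:

```python
# Back-of-the-envelope expected-value sketch; all figures are illustrative assumptions.
risk_of_rogue_agi = 1e-9       # the 0.0000001% risk quoted above
astronomical_stake = 1e50      # assumed stand-in for the cosmic inheritance, in future lives

expected_loss = risk_of_rogue_agi * astronomical_stake
print(f"Expected loss on a lives-only calculus: {expected_loss:.1e} lives")  # ~1.0e+41
```

On a calculus that only counts lives, that ~1e41 term swamps everything else, so the activity is forbidden by default; the point being argued here is that the loss of freedom imposed by forbidding it, for everyone and indefinitely, should appear as a term in the same calculus rather than being treated as weightless.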