Here’s my best guess at what you’re saying: it sounds like you’re restating your expectation that humans will be unusually altruistic or thoughtful compared to an unaligned alternative.
There, I’m just saying that human control is better than literal paperclip maximization.
This response still seems underspecified to me. Is the default unaligned alternative paperclip maximization in your view? I understand that Eliezer Yudkowsky has given arguments for this position, but it seems like you diverge significantly from Eliezer’s general worldview, so I’d still prefer to hear this take spelled out in more detail from your own point of view.
Your poll says:
“a society of people who look & act like humans, but they only care about maximizing paperclips”
And then you say:
so far my followers, who are mostly EAs, are much more happy to let the humans immigrate to our world, compared to the last two options. I claim there just aren’t really any defensible reasons to maintain this choice other than by implicitly appealing to a partiality towards humanity.
So, I think more human control is better than more literal paperclip maximization, the option given in your poll.
My overall position isn’t that the AIs will certainly be paperclippers, I’m just arguing in isolation about why I think the choice given in the poll is defensible.
I have the feeling we’re talking past each other a bit, and I suspect talking about this poll was something of a distraction. I have the sense of trying to convey a central point, but instead of getting it across, the conversation keeps slipping into how to interpret minor things I said, which I don’t see as very relevant.
I will probably take a break from replying for now, for these reasons, although I’d be happy to catch up some time and maybe have a call to discuss these questions in more depth. I definitely see you as trying a lot harder than most other EAs to make progress on these questions collaboratively with me.
I’d be very happy to have some discussion on these topics with you Matthew. For what it’s worth, I really have found much of your work insightful, thought-provoking, and valuable. I think I just have some strong, core disagreements on multiple empirical/epistemological/moral levels with your latest series of posts.
That doesn’t mean I don’t want you to share your views, or that they’re not worth discussion, and I apologise if I came off as too hostile. An open invitation to have some kind of deeper discussion stands.[1]
I’d like to try out the new dialogue feature on the Forum, but that’s a weak preference.
Agreed, sorry about that.