In that case, my main disagreement is with the idea that your Twitter poll is evidence for your claims.
More specifically:
I claim there just aren’t really any defensible reasons to maintain this choice other than by implicitly appealing to a partiality towards humanity.
Like you claim there aren’t any defensible reasons to think that what humans will do is better than literally maximizing paper clips? This seems totally wild to me.
Like you claim there aren’t any defensible reasons to think that what humans will do is better than literally maximizing paper clips?
I’m not exactly sure what you mean by this. There were three options, and human paperclippers were only one of these options. I was mainly discussing the choice between (1) and (2) in the comment, not between (1) and (3).
Here’s my best guess at what you’re saying: it sounds like you’re repeating that you expect humans to be unusually altruistic or thoughtful compared to an unaligned alternative. But the point of my previous comment was to state my view that this bias counted as “being partial towards humanity”, since I view the bias as irrational. In light of that, what part of my comment are you objecting to?
To be clear, you can think the bias I’m talking about is actually rational; that’s fine. But I just disagree with you for pretty mundane reasons.
[Incorporating what you said in the other comment]
Also, to be clear, I agree that the question of “how much worse/better is it for AIs to get vast amounts of resources without human society intending to grant those resources to the AIs from a longtermist perspective” is underinvestigated, but I think there are pretty good reasons to systematically expect human control to be a decent amount better.
Then I think it’s worth concretely explaining what these reasons are to believe that human control will be a decent amount better in expectation. You don’t need to write this up yourself, of course. I think the EA community should write these reasons up, because I currently view the proposition as non-obvious: despite being a critical belief in AI risk discussions, it’s usually asserted without argument. When I’ve pressed people on it in the past, they typically give very weak reasons.
I don’t know how to respond to an argument whose details are omitted.
Then I think it’s worth concretely explaining what these reasons are to believe that human control will be a decent amount better in expectation. You don’t need to write this up yourself, of course.
+1, but I don’t generally think it’s worth counting on “the EA community” to do something like this. I’ve been vaguely trying to pitch Joe on doing something like this (though there are probably better uses of his time), and his recent blog posts touch on similar topics. Also, it’s usually only a crux for longtermists, which is probably one of the reasons why no one has gotten around to this.
You didn’t make this clear, so I was just responding generically.
Separately, I think I feel a pretty similar intuition for case (2): people literally only caring about their families seems pretty clearly worse.
Here’s my best guess at what you’re saying: it sounds like you’re repeating that you expect humans to be unusually altruistic or thoughtful compared to an unaligned alternative.
There, I’m just saying that human control is better than literal paperclip maximization.
This response still seems underspecified to me. Is the default unaligned alternative paperclip maximization in your view? I understand that Eliezer Yudkowsky has given arguments for this position, but it seems like you diverge significantly from Eliezer’s general worldview, so I’d still prefer to hear this take spelled out in more detail from your own point of view.
Your poll says:
“a society of people who look & act like humans, but they only care about maximizing paperclips”
And then you say:
so far my followers, who are mostly EAs, are much more happy to let the humans immigrate to our world, compared to the last two options. I claim there just aren’t really any defensible reasons to maintain this choice other than by implicitly appealing to a partiality towards humanity.
So, I think more human control is better than more literal paperclip maximization, the option given in your poll.
My overall position isn’t that the AIs will certainly be paperclippers, I’m just arguing in isolation about why I think the choice given in the poll is defensible.
I have the feeling we’re talking past each other a bit, and I suspect talking about this poll was kind of a distraction. I have the sense of trying to convey a central point, but instead of getting that point across, the conversation keeps slipping into how to interpret minor things I said, which I don’t see as very relevant.
I will probably take a break from replying for now, for these reasons, although I’d be happy to catch up some time and maybe have a call to discuss these questions in more depth. I definitely see you as trying a lot harder than most other EAs to make progress on these questions collaboratively with me.
I’d be very happy to have some discussion on these topics with you, Matthew. For what it’s worth, I really have found much of your work insightful, thought-provoking, and valuable. I think I just have some strong, core disagreements on multiple empirical/epistemological/moral levels with your latest series of posts.
That doesn’t mean I don’t want you to share your views, or that they’re not worth discussion, and I apologise if I came off as too hostile. An open invitation to have some kind of deeper discussion stands.[1]
[1] I’d like to try out the new dialogue feature on the Forum, but that’s a weak preference.
Agreed, sorry about that.