See the example from Yudkowsky above. As I understand it, he is the main person who has encouraged rationalists to focus on AI. In trying to explain why AI is important to a smart person (Bryan Caplan), he appeals to the orthogonality argument, which has zero bearing on whether AI alignment will be hard or worth working on.
The Orthogonality Thesis is useful to counter the common naive intuition that sufficiently intelligent AI will be benevolent by default (which a lot of smart people tend to hold prior to examining the arguments in any detail). But as Steven notes above, it’s only one component of the argument for taking AGI x-risk seriously, and Yudkowsky lists several others in that example; he leads with orthogonality to prime the pump, i.e. to emphasise that common human intuitions aren’t useful here.
Hi Greg, I don’t think anyone would ever have held that it is logically impossible for AGI not to be aligned. That is clearly a crazy view. All the orthogonality argument proves is that it is logically possible for AGI not to be aligned, which is almost trivial.
Right, but I think “by default” is important here. Many more people seem to think alignment will happen by default (or at least something along the lines of us being able to muddle through, reasoning with the AI and convincing it to be good, or easily shutting it down if it’s not, or something), rather than the opposite.
All the argument shows is that it is logically possible for AGI not to be aligned. Since Bryan Caplan is a sane human being, it’s improbable that he ever rejected that claim. So it’s unclear why Yudkowsky would have presented it to him as an important argument about AGI alignment.
So the last thing Caplan says there is:
“1′. AIs have a non-trivial chance of being dangerously un-nice.
I do find this plausible, though only because many governments will create un-nice AIs on purpose.”
Which to me sounds like he doesn’t really get it. Like he’s ignoring “by default does things we regard as harmful” (which he kind of agrees to above; he agrees with “2. Instrumental convergence”). You’re right that the Orthogonality Thesis doesn’t carry the argument on its own, but in conjunction with Instrumental Convergence (and, to be more complete, mesa-optimisation), I think it does.
It’s a shame that Caplan doesn’t reply to Yudkowsky’s follow-up:
Bryan, would you say that you’re not worried about 1′ because:
1’a: You don’t think a paperclip maximizer is un-nice enough to be dangerous, even if it’s smarter than us.
1’b: You don’t think a paperclip maximizer of around human intelligence is un-nice enough to be dangerous, and you don’t foresee paperclip maximizers becoming much smarter than humans.
1’c: You don’t think that AGIs as un-nice as a paperclip maximizer are probable, unless those durned governments create AGIs that un-nice on purpose.
It’s tricky to see what happened in that debate because I have Twitter and that blog blocked on weekdays!
I just posted a reply to a similar comment about orthogonality + IC here.
‘By default’ seems like another murky term. The orthogonality thesis asserts (something like) that alignment-by-default isn’t something you should bet on at arbitrarily long odds, but maybe it’s nonetheless very likely to work out because, per Drexler, we just don’t code AI as an unbounded optimiser, which you might still call ‘by default’.
At the moment I have no idea what to think, tbh. But I lean towards focusing on GCRs that definitely need direct action in the short term, such as climate change, over ones that might be more destructive but where the relevant direct action is likely to be taken much further off.
So by ‘by default’ I mean without any concerted effort to address existential risk from AI, or just following “business as usual” with AI development. Yes, Drexler’s CAIS would be an example of this. But I’d argue that “just don’t code AI as an unbounded optimiser” is very likely to fail due to mesa-optimisers and convergent instrumental goals emerging in sufficiently powerful systems.
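To make that distinction concrete, here’s a minimal toy sketch (purely illustrative; the function names unbounded_optimiser and bounded_service and their arguments are hypothetical, not drawn from Drexler or any real system) of what an ‘unbounded optimiser’ versus a bounded, task-limited service might look like in code:

```python
# Toy sketch only: hypothetical names, no real AI system or library involved.

def unbounded_optimiser(utility, candidate_actions, world):
    """Open-ended optimisation: keep applying whichever action scores highest
    on `utility`, with no stopping condition and no limit on scope."""
    while True:
        best = max(candidate_actions(world), key=lambda a: utility(a, world))
        world = best(world)  # each action maps a world-state to a new one

def bounded_service(task_done, step, world, max_steps=10):
    """CAIS-flavoured alternative: perform one specific task within an
    explicit step budget, then stop and return control."""
    for _ in range(max_steps):
        if task_done(world):
            break
        world = step(world)
    return world
```

Of course, this only constrains the outer loop that we write; the worry is that a sufficiently powerful learned component inside something like bounded_service could still develop mesa-optimisers pursuing convergent instrumental goals, so bounding the outer loop isn’t a guarantee.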
Interesting you mention climate change, as I actually went from focusing on that pre-EA to now thinking that AGI is a much more severe, and more immediate, threat! (Although I also remain interested in other more “mundane” GCRs.)