‘By default’ seems like another murky term. The orthogonality thesis asserts (something like) that alignment by default isn’t something you should bet on at arbitrarily long odds, but maybe it’s nonetheless very likely to work out because, per Drexler, we just don’t code AI as an unbounded optimiser, which you might still call ‘by default’.
At the moment I have no idea what to think, tbh. But I lean towards focusing on GCRs that definitely need direct action in the short term, such as climate change, over ones that might be more destructive but where the relevant direct action probably won’t be taken until much further in the future.
So by ‘by default’ I mean without any concerted effort to address existential risk from AI, or just following “business as usual” with AI development. Yes, Drexler’s CAIS would be an example of this. But I’d argue that “just don’t code AI as an unbounded optimiser” is very likely to fail due to mesa-optimisers and convergent instrumental goals emerging in sufficiently powerful systems.
Interesting you mention climate change, as I actually went from focusing on that pre-EA to now thinking that AGI is a much more severe, and more immediate, threat! (Although I also remain interested in other more “mundane” GCRs.)