On AI quietism. Distinguish four things:
(1) Not believing in AGI takeover.
(2) Not believing that AGI takeover is near. (Ng)
(3) Believing in AGI takeover, but thinking it’ll be fine for humans. (Schmidhuber)
(4) Believing that AGI will extinguish humanity, but this is fine:
    - because the new thing is superior (maybe by definition, if it outcompetes us);
    - because scientific discovery is the main thing.
(4) is not a rational lack of concern about an uncertain or far-off risk: it’s a lack of caring, conditional on the risk being real.
Can there really be anyone in category (4)?
Sutton: “we could choose option (b) [acquiescence] and not have to worry about all that. What might happen then? We may still be of some value and live on. Or we may be useless and in the way, and go extinct. One big fear is that strong AIs will escape our control; this is likely, but not to be feared… ordinary humans will eventually be of little importance, perhaps extinct, if that is as it should be.”
Hinton: “the truth is that the prospect of discovery is too sweet.” He smiled awkwardly, the word hanging in the air—an echo of Oppenheimer, who famously said of the bomb, “When you see something that is technically sweet, you go ahead and do it, and you argue about what to do about it only after you have had your technical success.” As the scientists retreated to tables set up for refreshments, I asked Hinton if he believed an A.I. could be controlled. “That is like asking if a child can control his parents,” he said. “It can happen with a baby and a mother—there is biological hardwiring—but there is not a good track record of less intelligent things controlling things of greater intelligence.”
I expect this cope to become more common over the next few years.
(4) was definitely the story with Ben Goertzel and his “Cosmism”. I expect some “e/acc” libertarian types will also go for it. But it is and will stay pretty fringe imo.