On AI quietism. Distinguish four positions:
(1) Not believing in AGI takeover.
(2) Not believing that AGI takeover is near. (Ng)
(3) Believing in AGI takeover, but thinking it’ll be fine for humans. (Schmidhuber)
(4) Believing that AGI will extinguish humanity, but thinking this is fine,
    - because the new thing is superior (maybe by definition, if it outcompetes us); or
    - because scientific discovery is the main thing.
(4) is not a rational lack of concern about an uncertain or far-off risk: it’s a lack of caring, conditional on the risk being real.
Can there really be anyone in category (4)?
Sutton: we could choose option (b) [acquiescence] and not have to worry about all that. What might happen then? We may still be of some value and live on. Or we may be useless and in the way, and go extinct. One big fear is that strong AIs will escape our control; this is likely, but not to be feared… ordinary humans will eventually be of little importance, perhaps extinct, if that is as it should be.
Hinton: “the truth is that the prospect of discovery is too sweet.” He smiled awkwardly, the word hanging in the air—an echo of Oppenheimer, who famously said of the bomb, “When you see something that is technically sweet, you go ahead and do it, and you argue about what to do about it only after you have had your technical success.” As the scientists retreated to tables set up for refreshments, I asked Hinton if he believed an A.I. could be controlled. “That is like asking if a child can control his parents,” he said. “It can happen with a baby and a mother—there is biological hardwiring—but there is not a good track record of less intelligent things controlling things of greater intelligence.”
I expect this cope to become more common over the next few years.
I like all of your suggested actions. Two thoughts:
1) EA is both a set of strong claims about causes + an intellectual framework which can be applied to any cause. One explanation for what’s happening is that we grew a lot recently, and new people find the precooked causes easier to engage with (and the all-important status gradient of the community points firmly towards them). It takes a lot of experience and boldness to investigate and intervene on a new cause.
I suspect you won’t agree with this framing, but one way of viewing the interplay between these two things is as a classic explore/exploit tradeoff.[1] On this view, exploration (new causes, new and different people) is for discovering new causes.[2] Once you find something huge, you stop searching until it is fixed (a toy sketch after the next paragraph makes this concrete).
IMO our search actually did find something so important, neglected, and maybe tractable (AI) that it’s right to somewhat de-emphasise cause exploration until that situation begins to look better. We found a combination gold mine / natural fission reactor. This cause is even pluralistic, since you can’t e.g. admire art if there’s no world.
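Purely to make the metaphor concrete, here is a minimal sketch of the stopping rule I have in mind. The cause names, value estimates, and the “huge” threshold are all invented for illustration; this is a cartoon of the framing, not a model of EA prioritisation.

```python
import random

def choose_cause(estimated_value, epsilon=0.1, huge=100.0):
    """Toy explore/exploit policy over causes.

    estimated_value: dict mapping cause name -> current value estimate.
    While no cause looks 'huge', explore with probability epsilon and
    otherwise exploit the best cause found so far. Once some cause's
    estimate crosses the 'huge' threshold, stop exploring until it is fixed.
    """
    best = max(estimated_value, key=estimated_value.get)
    if estimated_value[best] >= huge:
        return best  # something huge found: pause the search
    if random.random() < epsilon:
        return random.choice(list(estimated_value))  # explore a new cause
    return best  # exploit the best cause found so far

# Invented numbers, for illustration only.
causes = {"cause A": 10.0, "cause B": 8.0, "AI": 120.0}
print(choose_cause(causes))  # -> "AI": exploration is paused while the estimate is this large
```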
2) But anyway, I agree that we have narrowed too much. See this post, which explains the significance of cause diversity on a maximising view, or my series of obituaries of people who did great things outside the community.
[1] I suspect this because you say that we shouldn’t have a “singular best way to do good”, and the bandit framing usually assumes one objective.
[2] Or new perspectives on causes / new ideas for causes / hidden costs of interventions / etc.