I’ve since gotten a bit more context, but I remember feeling super confused about these things when first wondering how much to focus on this stuff:
Before we get to “what’s the best argument for this,” just what are the arguments for (and against) (strongly) prioritizing AI stuff (of the kind that people in the community are currently working on)?
People keep saying heuristic-y things about self-improving AI and paperclips—just what arguments are they making? (What are the end-to-end / logically thorough / precise arguments here?)
A bunch of people seem to argue for “AI stuff is important” but believe / act as if “AI stuff is overwhelmingly important”—what are arguments for the latter view?
Even if AI is overwhelmingly important, why does this imply we should be focusing on the things the AI safety/governance fields are currently doing?
Some of the arguments for prioritizing AI seem to route through “(emerging) technologies are very important”—what about other emerging technologies?
If there’s such a lack of strategic clarity / robustly good things to do in AI governance, why not focus on broadly improving institutions?
Why should we expect advanced AI anytime soon?
What are AI governance people up to? (I.e. what are they working on / what’s their theory of change?)
What has the AI safety field accomplished (in terms of research, not just field-building)? (Is there evidence that AI safety is tractable right now?)
A lot of the other things that made me suspicious were outside-view-y considerations / “common sense” heuristics—to put it in a very one-sided way, it was something like, “So you’re telling me some internet forum is roughly the first and only community to identify the most important problem in history, despite this community’s vibes of overconfidence and hero-worship and non-legible qualifications and getting nerd-sniped, and this supposedly critical problem just happens to be some flashy thing that lines up with their academic interests and sounds crazy and isn’t a worry for the most clearly relevant experts?”
(If people are curious, the resources I found most helpful on these were: this, this, and this for 1.1, the former things + longtermism arguments + The Precipice on non-AI existential risks for 1.2, 1.1 stuff & stuff in this syllabus for 1.3 and 3, ch. 2 of Superintelligence for 1.4, this for 1.6, the earlier stuff (1.1 and 3) for 4, and various more scattered things for 1.5 and 2.)