You say that there hasn’t been much literature arguing for Sudden Emergence (the claim that AI progress will look more like the brain-in-a-box scenario than the gradual-distributed-progress scenario). I am interested in writing some things on the topic myself, but currently think it isn’t decision-relevant enough to be worth prioritizing. Can you say more about the decision-relevance of this debate?
Toy example: Suppose I write something that triples everyone’s credence in Sudden Emergence. How does that change what people do, in a way that makes the world better (or worse, depending on whether Sudden Emergence is actually true)?
I would be really interested in you writing on that!
It’s a bit hard to say what the specific impact would be, but beliefs about the magnitude of AI risk of course play at least an implicit role in lots of career/research-focus/donation decisions within the EA community; these beliefs also affect the extent to which broad EA orgs focus on AI risk relative to other cause areas. And I think that people’s beliefs about the Sudden Emergence hypothesis at least should have a large impact on their level of doominess about AI risk; I regard it as one of the biggest cruxes. So I’d at least be hopeful that, if everyone’s credence in Sudden Emergence changed by a factor of three, this would have some sort of impact on the portion of EA attention devoted to AI risk. I think that credences in the Sudden Emergence hypothesis should also have an impact on the kinds of risks/scenarios that people within the AI governance and safety communities focus on.
I don’t, though, have a much more concrete picture of the influence pathway.
OK, thanks. Not sure I can pull it off; that was just a toy example. Probably even my best arguments would have a smaller impact than a factor of three, at least when averaged across the whole community.
I agree with your explanation of the ways this would improve things… I guess I’m just concerned about opportunity costs.
Like, it seems to me that a tripling of credence in Sudden Emergence shouldn’t change what people do by more than, say, 10%. When you factor in tractability, neglectedness, personal fit, doing things that are beneficial under both Sudden Emergence and non-Sudden Emergence, etc., a factor of 3 in the probability of Sudden Emergence probably won’t change the bottom line for what 90% of people should be doing with their time. For example, I’m currently working on acausal trade stuff, and I think that if my credence in Sudden Emergence decreased by a factor of 3 I’d still keep doing what I’m doing.
Meanwhile, I could be working on AI safety directly, or I could be working on acausal trade stuff (which I think could plausibly lead to a more-than-10% improvement in EA effort allocation; or at least, that seems more plausible to me right now than working on Sudden Emergence).
I’m very uncertain about all this, of course.
Did you end up writing this post? (I looked through your LW posts since the timestamp of the parent comment, but it doesn’t seem like you did.) If not, I would be interested in seeing some sort of outline or short list of points, even if you don’t have time to write the full post.
Thanks for following up. Nope, I didn’t write it, but comments like this one and this one are making me bump it up in priority! Maybe it’s what I’ll do next.