Sure! It would depend on what you mean by “an argument against AI risk”:
If you mean “What’s the main argument that makes you more optimistic about AI outcomes?”, I made a list of these in 2018.
If you mean “What’s the likeliest way you think it could turn out that aligning AGI is unnecessary in order to do a pivotal act / initiate an as-long-as-needed reflection?”, I’d currently guess it’s using strong narrow-AI systems to accelerate you to Drexlerian nanotechnology (which can then be used to build powerful things like “large numbers of fast-running human whole-brain emulations”).
If you mean “What’s the likeliest way you think it could turn out that humanity’s current trajectory is basically OK / no huge actions or trajectory changes are required?”, I’d say that the likeliest scenario is one where AGI kills all humans, but this isn’t a complete catastrophe for the future value of the reachable universe because the AGI turns out to be less like a paperclip maximizer and more like a weird sentient alien that wants to fill the universe with extremely-weird-but-awesome alien civilizations. This sort of scenario is discussed in “Superintelligent AI is necessary for an amazing future, but far from sufficient”.
If you mean “What’s the likeliest way you think it could turn out that EAs are focusing too much on AI and should focus on something else instead?”, I’d guess it’s that we should focus more on biotech. E.g., this conjunction could turn out to be true: (1) AGI is 40+ years away; (2) by default, it will be easy for small groups of crazies to kill all humans with biotech in 20 years; and (3) EAs could come up with important new ways to avoid disaster if we made this a larger focus (though it’s already a reasonably large focus in EA).
Another way it could be bad that EAs are focusing on AI is if EAs are accelerating AGI capabilities / shortening timelines way more than we’re helping with alignment (or otherwise increasing the probability of good outcomes).