On many object-level claims, such as the probability that there will be an intelligence explosion, EAs and AI researchers do not actually differ very much. This survey demonstrates it: https://arxiv.org/abs/1705.08807
AI researchers are just more likely to take the attitude that anything less than ~10% likely to occur can be ignored, or that existential risks are not orders of magnitude more important than other causes, or to make similar kinds of judgement calls.
The one major technical issue on which EAs might systematically differ from AI researchers is whether current research is valid for addressing the problem.
Is there any data on how likely EAs think explosive progress after HLMI is? I would have thought more than 10%.
I would also have expected more debate about explosive progress, beyond the recent Hanson-Yudkowsky flare-up, if there were as much doubt in the community as that survey suggests.