I thought I’d offer up more object-level examples to try to push against your view. AI risk is a case in which EAs disagree with the consensus among numerous AI researchers and other intelligent people. In my view, a lot of the arguments I’ve heard from AI researchers have been very weak and haven’t shifted my credence all that much. But modesty here seems to push me toward the consensus to a greater extent than the object-level reasons warrant.
With respect to the question of AI risk, it seems to me that I should demote these people from my epistemic peer group because they disagree with me on the subject of AI risk. If you accept this, then it's hard to see what difference there is between immodesty and modesty.
The difference in many object-level claims, like the probability that there will be an intelligence explosion and so on, is not very large between EAs and AI researchers. This survey demonstrates it: https://arxiv.org/abs/1705.08807
AI researchers are just more likely to have an attitude that anything less than ~10% likely to occur should be ignored, or that existential risks are not orders of magnitude more important than other things, or similar kinds of judgement calls.
The one major technical issue where EAs might be systematically different from AI researchers would be the validity of current research in addressing the problem.
Is there any data on how likely EAs think that explosive progress after HLMI will happen? I would have thought it more than 10%?
I would also have expected more debate about explosive progress, more than just the recent Hanson–Yudkowsky flare-up, if there was as much doubt in the community as that survey suggests.