Within EA, work on x-risk is very siloed by type of threat: There are the AI people, the bio people, etc. Is this bad, or good?
Which of these is the correct analogy?
"Biology is to science as AI safety is to x-risk," or
"Immunology is to biology as AI safety is to x-risk"
EAs seem to implicitly think analogy 1 is correct: some interdisciplinary work is nice (e.g., biophysics), but most biologists can just be biologists (i.e., most AI x-risk people can just do AI).
The "existential risk studies" model (popular with CSER, SERI, and lots of other non-EA academics) seems to think that analogy 2 is correct, and that interdisciplinary work is totally critical: immunologists alone cannot achieve a useful understanding of the entire system they're trying to study, and they need to exchange ideas with other subfields of medicine/biology in order to have an impact. In other words, AI x-risk workers are missing critical pieces of the puzzle when they neglect broader x-risk studies.