Within EA, work on x-risk is very siloed by type of threat: There are the AI people, the bio people, etc. Is this bad, or good?
Which of these is the correct analogy?
1. "Biology is to science as AI safety is to x-risk," or
2. "Immunology is to biology as AI safety is to x-risk."
EAs seem to implicitly think analogy 1 is correct: some interdisciplinary work is nice (biophysics), but most biologists can just be biologists (i.e., most AI x-risk people can just do AI).
The "existential risk studies" model (popular with CSER, SERI, and lots of other non-EA academics) seems to hold that analogy 2 is correct and that interdisciplinary work is critical: immunologists alone cannot achieve a useful understanding of the entire system they're trying to study, and they need to exchange ideas with other subfields of medicine/biology in order to have an impact. By the same token, AI x-risk workers are missing critical pieces of the puzzle when they neglect broader x-risk studies.