MIRI primarily does research too though. Do you mean you prefer to support cause prioritization research?
Sort of. I mean that I support efforts to prioritize between catastrophic or existential risks, and broader searching, e.g., for alternative or multilateral approaches to A.I. risk, rather than just supporting MIRI’s research agenda.