Sort of. I mean, I support efforts to prioritize among catastrophic or existential risks, and more searching for alternatives, e.g., multilateral approaches to A.I. risk, relative to just supporting MIRI’s research agenda.