You can browse our papers and research summaries here and see if anything clicks, but failing that, I’m not sure there’s any simple heuristic I can suggest beyond “look for lots of separate lines of indirect evidence.” One question is whether we’re working on the right problems for addressing AI risk. Relevant indicators that come to mind include:
Stuart Russell’s alignment research group is interested in value learning and “theories of (bounded) rationality,” as well as corrigibility (1, 2).
A number of our research proposals were cited in FLI’s research priorities document, and our agent foundations agenda received one of the larger FLI grants.
FHI and DeepMind have collaborated on corrigibility work.
The Open Philanthropy Project’s research advisors don’t think logical uncertainty, decision theory, or Vingean reflection are likely to be safety-relevant.
The “Concrete Problems in AI Safety” agenda has some overlap with our research interests and goals (e.g., avoiding wireheading).
A separate question is whether we’re making reasonable progress on those problems, assuming they are in fact the right problems. Relevant indicators that come to mind:
An OpenPhil external reviewer described our HOL-in-HOL result as “an important milestone toward formal analysis of systems with some level of self-understanding.”
OpenPhil’s internal and external reviewers were unimpressed by a set of preliminary MIRI results that led up to logical induction.
Our reflective oracles framework was presented at a top AI conference, UAI.
Scott Aaronson thinks “Logical Induction” is important and theoretically interesting.
Our decision theory work hasn’t received any significant public endorsements from leading decision theorists.
… and so on.
If you don’t trust MIRI or yourself to assess the situation, I don’t think there’s any shortcut besides trying to gather and weigh miscellaneous pieces of evidence. (Possibly the conclusion will be that some parts of MIRI’s research are useful and others aren’t, since we work on a pretty diverse set of problems.)