The end goal is to prevent global catastrophes, but if a safety-conscious AGI team asked how we’d expect their project to fail, the two likeliest scenarios we’d point to are “your team runs into a capabilities roadblock and can’t achieve AGI” or “your team runs into an alignment roadblock and can easily tell that the system is currently misaligned, but can’t figure out how to achieve alignment in any reasonable amount of time.”
This is particularly helpful to know.
We worry about “unknown unknowns”, but I’d probably give them less emphasis here. We often focus on categories of failure modes that we think are easy to foresee. As a rule of thumb, when we prioritize a basic research problem, it’s because we expect it to help in a general way with understanding AGI systems and make it easier to address many different failure modes (both foreseen and unforeseen), rather than because of a one-to-one correspondence between particular basic research problems and particular failure modes.
Can you give an example or two of failure modes or “categories of failure modes that are easy to foresee” that you think are addressed by some HRAD topic? I’d thought previously that thinking in terms of failure modes wasn’t a good way to understand HRAD research.
As an example, the reason we work on logical uncertainty isn’t that we’re visualizing a concrete failure that we think is highly likely to occur if developers don’t understand logical uncertainty. We work on this problem because any system reasoning in a realistic way about the physical world will need to reason under both logical and empirical uncertainty, and because we expect a broad understanding of how the system reasons about the world to be important for ensuring that the optimization processes inside the system are aligned with the intended objectives of the operators.
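As a minimal toy sketch of that distinction (a generic illustration in Python, not MIRI’s formal framework, with the example numbers chosen arbitrarily): an agent can be uncertain about a coin it hasn’t flipped, which is empirical uncertainty, and uncertain in much the same way about an arithmetic fact it hasn’t yet had the compute to settle, which is logical uncertainty.

```python
# Toy sketch: empirical vs. logical uncertainty.
# Empirical uncertainty is about unobserved features of the world (a coin's bias);
# logical uncertainty is about facts already fixed by mathematics that the agent
# hasn't had the compute to settle yet (whether a large number is prime).

import math


def is_prime(n: int) -> bool:
    """The 'expensive computation' the agent may not have time to run before acting."""
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False
    return True


# Empirical uncertainty: credence about the external world, updated by observation.
p_heads = 0.5

# Logical uncertainty: before running is_prime, the agent can only use a heuristic
# prior -- e.g., roughly 1/ln(n) of integers near n are prime (prime number theorem).
n = 982_451_653
prior_credence_prime = 1 / math.log(n)
print(f"Prior credence that {n} is prime: {prior_credence_prime:.3f}")

# Running the computation resolves the logical question to 0 or 1 without any
# new observation of the physical world -- unlike flipping the coin.
print(f"Credence after computing: {float(is_prime(n)):.1f}")
```

The sketch only illustrates the distinction: both credences behave like probabilities, even though the second concerns a question whose answer is already fixed by mathematics rather than by the state of the world.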
I’m confused by this as a follow-up to the previous paragraph. This doesn’t look like an example of “focusing on categories of failure modes that are easy to foresee,” it looks like a case where you’re explicitly not using concrete failure modes to decide what to work on.
“how do we ensure the system’s cognitive work is being directed at solving the right problems, and at solving them in the desired way?”
I feel like this fits with the “not about concrete failure modes” narrative that I believed before reading your comment, FWIW.
I want to steer clear of language that might make it sound like we’re saying:
X ‘We can’t make broad-strokes predictions about likely ways that AGI could go wrong.’
X ‘To the extent we can make such predictions, they aren’t important for informing research directions.’
X ‘The best way to address AGI risk is just to try to advance our understanding of AGI in a general and fairly undirected way.’
The things I do want to communicate are:
All of MIRI’s research decisions are heavily informed by a background view in which there are many important categories of predictable failure, e.g., ‘the system is steering toward edges of the solution space’, ‘the function the system is optimizing correlates with the intended function at lower capability levels but becomes uncorrelated at high capability levels’, ‘the system has incentives to obfuscate and mislead programmers to the extent it models its programmers’ beliefs and expects false programmer beliefs to result in it better-optimizing its objective function.’ (The first two of these are sketched concretely below.)
The main case for HRAD problems is that we expect them to help in a gestalt way with many different known failure modes (and, plausibly, unknown ones). E.g., ‘developing a basic understanding of counterfactual reasoning improves our ability to understand the first AGI systems in a general way, and if we understand AGI better it’s likelier we can build systems to address deception, edge instantiation, goal instability, and a number of other problems’.
There usually isn’t a simple relationship between a particular open problem and a particular failure mode, but if we thought there were no way to predict in advance any of the ways AGI systems can go wrong, or if we thought a very different set of failures were likely instead, we’d have different research priorities.
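To make the first two failure categories above concrete, here is a minimal toy sketch (plain Python with invented objective functions, not anything from MIRI’s research): two optimizers maximize the same proxy objective, and the proxy tracks the intended objective only as long as the search stays away from the edges of the solution space.

```python
# Toy sketch: a proxy objective that correlates with the intended objective for a
# weak optimizer, but comes apart from it once the optimizer searches hard enough
# to reach the edges of the solution space.

import random


def intended_utility(x: float) -> float:
    """What the operators actually want: moderate values of x."""
    return x - 0.02 * x ** 2   # peaks at x = 25


def proxy_utility(x: float) -> float:
    """The measurable stand-in the system is actually told to maximize."""
    return x


random.seed(0)

# A weak optimizer only samples a small, typical region of the space;
# a strong optimizer samples far more widely and far more often.
searches = {
    "weak optimizer":   [random.uniform(0, 30) for _ in range(10)],
    "strong optimizer": [random.uniform(0, 200) for _ in range(10_000)],
}

for name, candidates in searches.items():
    best = max(candidates, key=proxy_utility)
    print(f"{name}: picks x = {best:6.1f}, "
          f"proxy = {proxy_utility(best):6.1f}, "
          f"intended utility = {intended_utility(best):7.1f}")

# Typical output: the weak optimizer's proxy-maximizing choice is also near-optimal
# by the intended measure, while the strong optimizer's choice (x close to 200)
# scores around -600 on the intended measure.
```

Nothing here is specific to AGI; the point is only to make the ‘correlated at low capability, uncorrelated at high capability’ pattern concrete.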
Thanks Nate!