Can you give an example or two of failure modes or “categories of failure modes that are easy to foresee” that you think are addressed by some HRAD topic? I’d thought previously that thinking in terms of failure modes wasn’t a good way to understand HRAD research.
I want to steer clear of language that might make it sound like we’re saying:
X ‘We can’t make broad-strokes predictions about likely ways that AGI could go wrong.’
X ‘To the extent we can make such predictions, they aren’t important for informing research directions.’
X ‘The best way to address AGI risk is just to try to advance our understanding of AGI in a general and fairly undirected way.’
The things I do want to communicate are:
All of MIRI’s research decisions are heavily informed by a background view in which there are many important categories of predictable failure, e.g., ‘the system is steering toward edges of the solution space’, ‘the function the system is optimizing correlates with the intended function at lower capability levels but comes uncorrelated at high capability levels’ (see the toy sketch after this list), ‘the system has incentives to obfuscate and mislead programmers to the extent it models its programmers’ beliefs and expects false programmer beliefs to result in it better-optimizing its objective function.’
The main case for HRAD problems is that we expect them to help in a gestalt way with many different known failure modes (and, plausibly, unknown ones). E.g., ‘developing a basic understanding of counterfactual reasoning improves our ability to understand the first AGI systems in a general way, and if we understand AGI better it’s likelier we can build systems to address deception, edge instantiation, goal instability, and a number of other problems’.
There usually isn’t a simple relationship between a particular open problem and a particular failure mode, but if we thought there were no way to predict in advance any of the ways AGI systems can go wrong, or if we thought a very different set of failures were likely instead, we’d have different research priorities.
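To make the second failure mode above concrete, here is a minimal toy sketch, assuming only NumPy; the true_value/proxy_value functions and the heavy-tailed error term are illustrative assumptions of mine, not anything taken from MIRI’s research. A system picks the proxy-best option from an increasingly wide search, standing in for increasing capability:

```python
import numpy as np

rng = np.random.default_rng(0)

def true_value(x):
    # The outcome we actually care about (hypothetical stand-in).
    return x[..., 0]

def proxy_value(x):
    # The measured objective: true value plus a heavier-tailed error term.
    # For typical samples the two correlate well; for the extreme samples a
    # strong optimizer selects, the error term tends to dominate.
    return x[..., 0] + x[..., 1]

for n_candidates in [10, 100, 10_000, 1_000_000]:
    # Treat "capability" as how many candidate actions the system can
    # search over before picking the proxy-best one.
    x = np.empty((n_candidates, 2))
    x[:, 0] = rng.normal(size=n_candidates)            # drives true value
    x[:, 1] = rng.standard_t(df=3, size=n_candidates)  # heavy-tailed error
    best = np.argmax(proxy_value(x))
    print(f"search over {n_candidates:>9,} options: "
          f"proxy = {proxy_value(x[best]):6.2f}, "
          f"true value = {true_value(x[best]):6.2f}")
```

Under this setup, the proxy score typically keeps climbing as the search widens while the true value stalls or regresses, which is the ‘correlated at lower capability levels, uncorrelated at high capability levels’ pattern in miniature.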