(apologies for slowness; I’m not here much)
I’d say it’s more about being willing to update on less direct evidence when the risk of getting more direct evidence is high.
Clearly we should aim to get more evidence. The question is how to best do that safely. At present we seem to be taking the default path—of gathering evidence in about the easiest way, rather than going for something harder, slower and safer. (e.g. all the “we need to work with frontier models” stuff; I do expect that’s most efficient on the empirical side; I don’t expect it’s a safe approach)
Joe Collman
In principle, we do the same thing as with any claim (whether explicitly or otherwise):
- Estimate the expected value of (directly) testing the claim.
- Test it if and only if (directly) testing it has positive EV.
The point here isn’t that the claim is special, or that AI is special—just that the EV calculation consistently comes out negative (unless someone else is about to do something even more dangerous—hence the need for coordination).
This is unusual and inconvenient. It appears to be the hand we’ve been dealt.
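The decision rule above can be sketched in a few lines. This is only an illustrative toy (the function names and all the numbers are hypothetical, not anything from a real analysis): the point is just that a modest value of information, multiplied against a small probability of a very large loss, comes out negative.

```python
# Illustrative sketch of the decision rule above. All names and numbers
# are hypothetical; real estimates would be far harder to pin down.

def ev_of_direct_test(value_of_information: float,
                      p_catastrophe: float,
                      cost_of_catastrophe: float) -> float:
    """EV of running the test: what we learn, minus the expected downside."""
    return value_of_information - p_catastrophe * cost_of_catastrophe

def should_test(value_of_information: float,
                p_catastrophe: float,
                cost_of_catastrophe: float) -> bool:
    """Test the claim directly if and only if doing so has positive EV."""
    return ev_of_direct_test(value_of_information,
                             p_catastrophe,
                             cost_of_catastrophe) > 0

# A modest payoff against a 1% chance of a very large loss:
# 10 - 0.01 * 10_000 = -90, so the test isn't worth running.
print(should_test(10.0, 0.01, 10_000.0))  # → False
```

Nothing here is special to AI; the claim is just that for direct tests of the dangerous claims, this calculation keeps coming out negative.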
I think you’re asking the right question: what is one supposed to do with a claim that can’t be empirically tested?
I broadly agree with this, but at least with AI safety there’s a Goodharting issue: we don’t want AIS researchers optimising for legibly impressive ideas/results/writeups.
I assume there’s a similar-in-principle issue for most cause areas, but it does seem markedly worse for AIS, given the lack of meaningful feedback on the most important issues.
There’s a significant downside even in having some proportion of EA AIS researchers focus on more legible results: it gives outsiders a warped impression of useful AIS research. This happens by default, since there are many incentives to pick a legibly impressive line of research, and there’ll be more engagement with more readable content.
None of this is to say that I know e.g. MIRI-style research to be the right approach.
However, I do think we need to be careful not to optimise for the appearance of strong object level work.
I think a lot depends on whether we’re:
- Aiming to demonstrate that deception can happen.
- Aiming to robustly avoid deception.
For demonstration, we can certainly do useful empirical stuff—ARC Evals already did the lying-to-a-TaskRabbit-worker demonstration (clearly this isn’t anything like deceptive alignment, but it is deception [given suitable scaffolding]).
I think that other demonstrations of this kind will be useful in the short term.
For avoiding all forms of deception, I’m much more pessimistic—since this requires us to have no blind spots, and to address the problem in a fundamentally general way. (personally I doubt there’s a [general solution to all kinds of deception] without some pretty general alignment solution—though I may be wrong)
I’m sure we’ll come up with solutions to particular types (or particular definitions) of deception in particular contexts. This doesn’t necessarily tell us much about other types of deception in other contexts. (for example, this kind of thing—but not only this kind of thing)
I’d also note that “reducing the uncertainty” is only progress when we’re correct. The problem that kills us isn’t uncertainty, but overconfidence. (though granted it might be someone else’s overconfidence)