Be wary of setting a trap where there’s no possible way for you to take claims of high p(doom) seriously: when someone gives more arguments for doom than for hope, you assume they’re trying to trick you by filtering out secret strong reasons for hope; and when someone gives similar numbers of arguments for doom and for hope, you assume they can’t really think p(doom) is that high.
I briefly touched on this at the end of the post and in this comment thread. In short:
Eehh, you can’t just ignore the fact that your evidence is being filtered.
Strong kinds of evidence (e.g., empirical evidence, mathematical proof, very compelling arguments) would still move my needle; weak or fuzzy arguments much less so.
I can still process evidence from my own eyes, e.g., observe progress, tap into sources that I think are less filtered, think about this for myself, etc.
I can still “take claims of high p(doom) seriously” in the sense of believing that people reporting them hold that as a sincere belief.
Though that doesn’t necessarily inspire a compulsion to defer to those beliefs.
That all seems right to me, and compatible with what I was saying. The part of Sphor’s comment that seemed off to me was “against a much larger corpus of writings and communications by you and MIRI emphasizing risks from AGI”: one blog post is a small data point to weigh against lots of other data points, but the relevant data to weigh it against isn’t “MIRI wrote other things that emphasize risks from AGI” in isolation, as though “an organization or individual wrote a lot of arguments for X” were, on its own, strong reason to discount those arguments as filtered.
The thing doing the work has to be some background model of the arguers (or of some process upstream of the arguers), not a raw count of how often someone argues for a thing. Otherwise you run into the “damned if you argue a lot for X, damned if you don’t argue a lot for X” problem.
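To make that concrete, here’s a minimal toy sketch (my own illustration, not anything from the original posts; the function name and all parameter values are made up for the example). It treats each published argument as independently favoring doom with some probability that depends both on whether doom is real and on your model of the arguer’s filtering. The same raw argument counts then produce very different updates under different filtering models, which is the sense in which the background model of the arguers, not the count, does the work:

```python
from math import comb

def posterior_doom(prior, n_doom, n_hope, p_doom_if_doom, p_doom_if_fine):
    """Toy P(doom | observed argument counts), assuming each published
    argument independently favors doom with probability p_doom_if_doom
    (if doom is real) or p_doom_if_fine (if it isn't). These two
    parameters encode your model of the arguer's filtering; they, not
    the raw counts, do the real work in the update."""
    n = n_doom + n_hope
    like_doom = comb(n, n_doom) * p_doom_if_doom**n_doom * (1 - p_doom_if_doom)**n_hope
    like_fine = comb(n, n_doom) * p_doom_if_fine**n_doom * (1 - p_doom_if_fine)**n_hope
    return prior * like_doom / (prior * like_doom + (1 - prior) * like_fine)

# Same observation (8 doom arguments, 2 hope arguments), two arguer models:
# an arguer modeled as roughly tracking the truth...
print(posterior_doom(0.5, 8, 2, 0.8, 0.3))   # ~0.995: large update toward doom
# ...vs. one modeled as publishing mostly doom arguments either way.
print(posterior_doom(0.5, 8, 2, 0.8, 0.75))  # ~0.52: barely moves the prior
```

Under the one-sided model, eight doom arguments are close to what you’d expect whether or not doom is real, so they carry almost no information; under the truth-tracking model, the very same counts are strong evidence. Neither the lopsided count nor the symmetric count is damning on its own.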