[writing in my personal capacity, but asked an 80k colleague if it seemed fine for me to post this]
Thanks a lot for writing this—I agree with a lot of (most of?) what’s here.
One thing I’m a bit unsure of is the extent to which these worries have implications for the beliefs of those of us who are hovering around 5% x-risk this century from AI, and who are one step removed from the Bay Area epistemic and social environment you write about. My guess is that they don’t have many implications for most of us, because (though what you say is much better articulated) some of this is already naturally getting into people’s estimates.
e.g. in my case, I think a lot of what you’re writing about is why, for my all-things-considered beliefs, I partly “defer at a discount” to people who know a ton about AI and have high x-risk estimates. I take their arguments, find them pretty persuasive, end up at some lower but still middlingly high probability, and then downgrade everything because of worries like the ones you cite, which I think is part of why I end up near 5%.
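To make “defer at a discount” concrete, here’s a minimal sketch of the shape of the move. Everything in it is a made-up illustration, not my actual procedure or anyone’s real estimates: the log-odds blending, the weights, and the specific numbers are all assumptions.

```python
# Illustrative only: "defer at a discount" modeled as blending an inside-view
# probability with experts' estimates in log-odds space, then shrinking the
# result part-way back toward the inside view. All numbers are placeholders.
import math

def logit(p: float) -> float:
    return math.log(p / (1 - p))

def sigmoid(x: float) -> float:
    return 1 / (1 + math.exp(-x))

inside_view = 0.02           # hypothetical: my own read of the arguments
expert_views = [0.3, 0.5]    # hypothetical: high estimates I partly defer to
deference_weight = 0.5       # how much weight the experts' views get
discount = 0.5               # extra shrinkage for worries like those in the post

expert_mean = sum(map(logit, expert_views)) / len(expert_views)
blended = (1 - deference_weight) * logit(inside_view) + deference_weight * expert_mean
discounted = logit(inside_view) + discount * (blended - logit(inside_view))

print(f"undiscounted blend: {sigmoid(blended):.3f}")     # ~0.10
print(f"after discount:     {sigmoid(discounted):.3f}")  # ~0.05
```

The point is just that this two-step structure (find the arguments persuasive, then shrink for the cited worries) can land you near 5% even when the people you defer to are far above it.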
This kind of thing probably does have the problematic effect of incentivising the Bay Area folks to hold more and more extreme probabilities—so that, to the extent that they care, quasi-normies like me will end up with a higher probability (closer to the truth, in their view) after deferring at a discount.
The 5% figure seems pretty common, and I think this might also be a symptom of risk inflation.
There is a huge degree of uncertainty around this topic. The factors involved in any prediction vary by many orders of magnitude, so it seems like we should expect the estimates to vary by orders of magnitude as well. So you might get some people saying the odds are 1 in 20, or 1 in 1,000, or 1 in a million, and I don’t see how any of those estimates can be ruled out as unreasonable. Yet I hardly see anyone giving estimates of 0.1% or 0.001%.
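As a toy illustration of that propagation claim (my own sketch; the three-factor structure and the lognormal spreads are assumptions, purely for illustration):

```python
# Toy Monte Carlo, purely illustrative: an estimate built as a product of a
# few factors, each uncertain by roughly an order of magnitude, produces
# final probabilities that themselves span orders of magnitude.
import math
import random

random.seed(0)

def one_estimate() -> float:
    p = 0.01  # a hypothetical central estimate
    for _ in range(3):  # three uncertain multiplicative factors
        p *= math.exp(random.gauss(0, math.log(10) / 2))
    return min(p, 1.0)

samples = sorted(one_estimate() for _ in range(10_000))
for q in (0.05, 0.50, 0.95):
    print(f"{q:.0%} quantile: {samples[int(q * len(samples))]:.1e}")
```

The 5th and 95th percentiles here differ by a factor of several hundred, which is the sense in which 1 in 20, 1 in 1,000, and 1 in a million could all sit inside the plausible range.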
I think people are using 5% as a stand-in for “can’t rule it out”. Like, why did you settle at 1 in 20 instead of 1 in a thousand?
It looks like we landed on the same thought. User Muster the Squirrels quoted your comment in a reply to my comment on ACX.
Hey,
Your last point about exaggeration incentives describes an incentive that could exist, but I don’t see it playing out.
For 80kh itself, considerations such as those in this post might apply to career advisors, who have the tricky job of balancing charismatic persuasion with simply providing evidence and stepping back when they try to help people make better career decisions.
[context: I’m one of the advisors, and manage some of the others, but am describing my individual attitude below]
FWIW I don’t think the balance you indicated is that tricky, and think that conceiving of what I’m doing when I speak to people as ‘charismatic persuasion’ would be a big mistake for me to make. I try to:
Say things I think are true, and explain why I think them (both the internal logic and external evidence if it exists) and how confident I am.
Ask people questions in a way which helps them clarify what they think is true, and which things they are more or less sure of.
Make tradeoffs (e.g. between a location preference and a desire for a particular job) explicit to people who I think might be missing that they need to make one, but usually without then suggesting which tradeoff to make; instead I suggest that they go and think about it/talk to other people affected by it.
Encourage people to think through things for themselves, usually suggesting resources which will help them do that or give a useful perspective, as well as just saying ‘this seems worth you taking time to think about’.
To the extent that I’m paying attention to how other people perceive me[1], I’m usually trying to work out how to stop people deferring to me when they shouldn’t without running into the “confidence all the way up” issue.
[1] In a work context, that is. I’m unfortunately usually pretty anxious about, and therefore paying a bunch of attention to, whether people are angry/upset with me, though this is getting better, and it’s easy to mostly ‘switch off’ on calls because the person in front of me takes my full attention.