Thanks for writing this!

One thing I really agreed with.

> For instance, I’m worried people will feel bait-and-switched if they get into EA via WWOTF then do an 80,000 Hours call or hang out around their EA university group and realize most people think AI risk is the biggest longtermist priority, many thinking this by a large margin.
I particularly appreciate your point about avoiding ‘bait-and-switch’ dynamics. I recognize that it’s important to build broad support for a movement, but I ultimately think it’s crucial to be transparent about the key considerations and motivations within longtermism. If, for example, the prospect of ‘digital minds’ is an essential part of how leading people in the movement think about the future, then that should be part of public outreach, however off-putting or unintuitive it may be. (MacAskill has a comment about excluding the subject here).
One thing I disagreed with.
> MacAskill at times seemed reluctant to quantify his best-guess credences, especially in the main text.
I agree it’s good to be transparent about priorities, including the weight placed on AI risk within the movement. But I’m less convinced that sharing subjective numerical credences is so important, and I think the practice sometimes has real downsides, especially for extremely speculative subjects. Making implicit beliefs explicit is helpful. But it also causes people to anchor on what may ultimately be an extremely shaky and speculative guess, hindering further independent analysis and leading to long citation trails. For example, I think the “1-in-6” estimate from The Precipice may have led to premature anchoring on that figure, and likely is relied upon too much relative to how speculative it necessarily is.
I appreciate that there are many benefits of sharing numerical credences and you seem like an avid proponent of sharing subjective credences (you do a great job at it in this post!), so we don’t have to agree. I just wanted to highlight one substantial downside of the practice.
Oh, and I also quite liked your section on ‘the balance of positive vs negative value in current lives’!
Hey Joshua, appreciate you sharing your thoughts (strong upvoted)! I think we actually agree about the effects of sharing numerical credences more than you might think, but disagree about the solution.
> But it also causes people to anchor on what may ultimately be an extremely shaky and speculative guess, hindering further independent analysis and leading to long citation trails. For example, I think the “1-in-6” estimate from The Precipice may have led to premature anchoring on that figure, and likely is relied upon too much relative to how speculative it necessarily is.
I agree that this is a substantial downside of sharing numerical credences. I saw it first-hand when people took the numbers in my previous post more seriously than I had intended (as you also mentioned!).
However, I think there are large benefits to sharing numerical credences, such that the solution isn’t to share credences less but instead to improve the culture around them.
I think we should shift EA’s culture to be more favorable toward sharing numerical credences even (especially!) when everyone involved knows they’re tentative, brittle, etc. And we should be able to have discussions involving credences while worrying less that others will take them too seriously.
I hope I’ve been contributing to this somewhat, e.g. by describing my motivation for including confidence numbers as: “I decided it was worth it to propose a definition and go ahead and use as many made-up numbers as possible for transparency.” And I’ve tried to push back when I’ve perceived others as taking credences/BOTECs I’ve given too seriously in the past.
Some more ideas for shifting the culture around numerical credences:
Report your resilience to show how brittle your beliefs are (i.e. how likely your credences are to shift with further evidence or argument).
Highlight how much other reasonable people disagree with your credences.
Openly change your mind and publicly shift your credences when new evidence comes in, or someone presents a good counter-argument.
Explicitly encourage others not to cite your numbers, if you believe they are too brittle (you mention this in your other comment).
I’d love to get others’ ideas for shifting the culture here!