I'd be curious if anyone reading this would argue.
I think it would depend a lot on how we operationalise the stance you're arguing in favour of.
Overall, at the margin, I'm in favour of:
less use of vague-yet-defensible language
EAs/people in general making and using more explicit, quantitative estimates (including probability estimates)
(I'm in favour of these things both in general and when it comes to cause prioritisation work.)
But I'm somewhat tentative/moderate in those views. For the sake of conversation, I'll skip stating the arguments in favour of those views, and just focus on the arguments against (or the arguments for tentativeness/moderation).
Essentially, as I outlined in this post (which I know you already read and left useful comments on), I think making, using, and making public quantitative estimates might sometimes:
Cost more time and effort than alternative approaches (such as more qualitative, "all-things-considered" assessments/discussions)
Exclude some of the estimators' knowledge (which could've been leveraged by alternative approaches)
Cause overconfidence and/or underestimation of the value of information
Succumb to the optimizer's curse
Cause anchoring
Cause reputational issues
(These downsides won't always occur, can sometimes occur more strongly if we use approaches other than quantitative estimates, and can be outweighed by the benefits of quantitative estimates. But here I'm just focusing on "arguments against".)
As a result:
I don't think we should always aim for or require quantitative estimates (including in cause prioritisation work)
I think it may often be wise to combine use of quantitative estimates, formal models, etc. with more intuitive/all-things-considered/"black-box" approaches (see also)
I definitely think some statements/work from EAs and rationalists have used quantitative estimates in an overconfident way (sometimes wildly so), and/or have been treated by others as more certain than they are
It's plausible to me that this overconfidence problem has not merely co-occurred or correlated with the use of quantitative estimates, but that it tends to be exacerbated by it
But I'm not at all certain of that. Using quantitative estimates can sometimes help us see our uncertainty, critique people's stances, have reality clearly prove us wrong (or rather, poorly calibrated), etc.
Relatedly, I think people using quantitative estimates should be very careful to remember how uncertain those estimates are and to communicate this clearly
But I'd say the same for most qualitative work in domains like longtermism
It's plausible to me that the anchoring and/or reputational downsides of making one's quantitative estimates public outweigh the benefits of doing so (relative to just making more qualitative conclusions and considerations public)
But I'm not at all certain of that (as demonstrated by me making this database)
And I think this'll depend a lot on how well thought-out one's estimates are, how well one can communicate uncertainty, what one's target audiences are, etc.
And it could still be worth making the estimates and not communicating them, or communicating them less publicly
I don't think this position strongly contrasts with your or Michael's positions. And indeed I'm a fan of what I've seen of both of your work, and overall I favour more work like that. But these do seem like nuances/caveats worth noting.
I'm not advocating for "poorly done quantitative estimates". I think anyone reasonable would admit that it's possible to bungle them.
I'm definitely not happy with a local optimum of "not having estimates". It's possible that "having a few estimates" can be worse, but I imagine we'll at some point want to get to "having lots of estimates, and becoming mature enough to handle them", so that's the direction to aim for.
I think the "local vs global optima" framing is an interesting way of looking at it.
That reminds me of some of my thinking when I was trying to work out whether it'd be net positive to make that database of existential risk estimates (vs it being net negative due to anchoring, reputational issues for EA/longtermists, etc.). In particular, a big part of my reasoning was something like:
It's plausible that it's worse for this database to exist than for there to be no public existential risk estimates. But what really matters is whether it's better that this database exist than that there be a small handful of existential risk estimates, scattered in various different places, with people often referring to only one set in a given instance (e.g., the 2008 FHI survey), sometimes as if it's the "final word" on the matter.
That situation seems probably even worse from an anchoring and reputational perspective than there being a database. This is because seeing a larger set of estimates side by side could help people see how much disagreement there is and thus have a more appropriate level of uncertainty and humility.
With your comment in mind, I'd now add:
But all of that is just about how good various different present-day situations would be. We should also consider what position we ultimately want to reach.
It seems plausible that we could end up with a larger set of more trustworthy and more independently-made existential risk estimates. And it seems likely that this would be better than the situation weāre in now.
Furthermore, it seems plausible that making this database moves us a step towards that destination. This could be a reason to make the database, even if doing so was slightly counterproductive in the short term.
Nice post. I think I agree with all of that.