I think the "local vs global optima" framing is an interesting way of looking at it.
That reminds me of some of my thinking when I was trying to work out whether it'd be net positive to make that database of existential risk estimates (vs it being net negative due to anchoring, reputational issues to EA/longtermists, etc.). In particular, a big part of my reasoning was something like:
It's plausible that it's worse for this database to exist than for there to be no public existential risk estimates. But what really matters is whether it's better that this database exist than that there be a small handful of existential risk estimates, scattered in various different places, and with people often referring to only one set in a given instance (e.g., the 2008 FHI survey), sometimes as if it's the "final word" on the matter.
That situation seems, if anything, even worse from an anchoring and reputational perspective than there being a database, because seeing a larger set of estimates side by side could help people see how much disagreement there is and thus adopt a more appropriate level of uncertainty and humility.
With your comment in mind, I'd now add:
But all of that is just about how good various different present-day situations would be. We should also consider what position we ultimately want to reach.
It seems plausible that we could end up with a larger set of more trustworthy and more independently-made existential risk estimates. And it seems likely that this would be better than the situation we're in now.
Furthermore, it seems plausible that making this database moves us a step towards that destination. This could be a reason to make the database, even if doing so were slightly counterproductive in the short term.