Here’s a dynamic that I’ve seen pop up more than once.
Person A says that an outcome they judge to be bad will occur with high probability, while making a claim of the form “but I don’t want (e.g.) alignment to be doomed — it would be a huge relief if I’m wrong!”
It seems uncontroversial that Person A would like to be shown that they’re wrong in a way that vindicates their initial forecast as ex ante reasonable.
It seems more controversial whether Person A would like to be shown that their prediction was wrong, in a way that also shows their initial prediction to have been ex ante unreasonable.
In my experience, it’s much easier to acknowledge that you were wrong about some specific belief (or the probability of some outcome), than it is to step back and acknowledge that the reasoning process which led you to your initial statement was misfiring. Even pessimistic beliefs can be (in Ozzie’s language) “convenient beliefs” to hold.
If we identify ourselves with our ability to think carefully, coming to believe that there are errors in our reasoning process can hit us much more personally than updates about errors in our conclusions. An optimistic update might mean coming to think that my projects have been less worthwhile than I thought, that my local community is less effective than I thought, or that my background framework or worldview was in error. I think these updates can be especially painful for people who are liable to identify with their ability to reason well, or with the unusual merits of their chosen community.
To clarify: I’m not claiming that people with more pessimistic conclusions are, in general, more likely to be making reasoning errors. Obviously there are plenty of incentives towards believing rosier conclusions. I’m simply claiming that if someone arrives at a pessimistic conclusion based on faulty reasoning, you shouldn’t necessarily expect optimistic pushback to be uniformly welcomed, for all of the standard reasons that updates of the form “I could’ve done better on a task I care about” can be hard to accept.
Also, sometimes the people making such predictions have their identity, reputation, or career tied to the pessimistic belief. They might consciously prefer the optimistic outcome, but I reckon that emotionally and subconsciously the bias would clearly run in the other direction.