I think it is good epistemic practice not to discount arguments without analysing them.
The fact that I am very interested in one outcome may help me to find very powerful arguments. For example, in the AI case, it is my pre-commitment to ask any future benevolent AI to run many simulations of any Rogue AIs. The idea was suggested by R. Nelson in 2007. Such a pre-commitment is not information which the AI could generate itself; rather, it informs the AI about the actual situation in the outside world, which could affect the total number of paperclips it will be able to create.