She’s unsure whether this speeds up or slows down AI development; her credence is imprecise, represented by the interval [0.4, 0.6]. She’s confident, let’s say, that speeding up AI development is bad.
That’s an awfully (in)convenient interval to have! It is the unique placement of an interval of that length, with no distinguishing views about any part of it, such that integrating over it gives you a probability of 0.5 and an expected impact of 0.
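To spell out the arithmetic (a minimal sketch, with assumptions not in the original: a uniform weighting over the interval, and symmetric stakes of -v if the action speeds AI development up and +v if it slows it down):

$$\bar{p} \;=\; \frac{1}{0.2}\int_{0.4}^{0.6} p\,dp \;=\; 0.5, \qquad \mathbb{E}[\text{impact}] \;=\; -v\,\bar{p} + v\,(1-\bar{p}) \;=\; v\,(1-2\bar{p}) \;=\; 0.$$

Any other interval of length 0.2, say [0.45, 0.65], would give a mean probability of 0.55 and an expected impact of -0.1v, so only the interval centered exactly on 0.5 washes out.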
The standard response to that is that you should weigh all these and do what is in expectation best, according to your best-guess credences. But maybe we just don’t have sufficiently fine-grained credences for this to work.
If the argument from cluelessness depends on giving that kind of special status to imprecise credences, then I just reject them, for the general reason that coarsening credences leads to worse decisions and predictions (particularly if one has done basic calibration training and has some numeracy and skill at prediction). There is signal to be lost in coarsening on individual questions. And for compound questions with various premises or contributing factors, making use of the signal on each of them means your overall view will be moved by that signal.
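As a toy illustration of the compound-question point (all numbers hypothetical, not Friedman's data): suppose a conclusion requires five independent premises to hold; snapping each premise's credence to a coarse bucket before combining them typically produces a larger error in the compound estimate than keeping the fine-grained numbers.

```python
import random

# Toy illustration (hypothetical numbers): a conclusion that depends on five
# independent premises. Compare the error in the compound probability when
# each premise's credence is kept fine-grained vs. coarsened to a few buckets.

BUCKETS = [0.1, 0.3, 0.5, 0.7, 0.9]  # a coarse "verbal odds" scale (assumed)

def coarsen(p):
    """Snap a credence to the nearest coarse bucket."""
    return min(BUCKETS, key=lambda b: abs(b - p))

def compound(ps):
    """Probability that all independent premises hold."""
    out = 1.0
    for p in ps:
        out *= p
    return out

def trial(n_premises=5, noise=0.05):
    true_ps = [random.uniform(0.2, 0.9) for _ in range(n_premises)]
    # A roughly calibrated forecaster's fine-grained estimates: truth plus small noise.
    fine = [min(0.99, max(0.01, p + random.gauss(0, noise))) for p in true_ps]
    coarse = [coarsen(p) for p in fine]
    truth = compound(true_ps)
    return abs(compound(fine) - truth), abs(compound(coarse) - truth)

random.seed(0)
results = [trial() for _ in range(10_000)]
fine_err = sum(r[0] for r in results) / len(results)
coarse_err = sum(r[1] for r in results) / len(results)
print(f"mean error, fine-grained: {fine_err:.3f}")
print(f"mean error, coarsened:    {coarse_err:.3f}")  # typically noticeably larger
```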
Chapter 3 of Jeffrey Friedman’s book War and Chance: Assessing Uncertainty in International Politics presents data and arguments showing large losses from coarsening credences instead of just giving a number between 0 and 1. I largely share his negative sentiments, especially about imprecise credences.
[Value-of-information (VOI) considerations around less-investigated credences, which are more likely to be moved by further investigation, are fruitful grounds to delay action in order to acquire or await information that one actually expects to obtain; but they are not the same thing as imprecise credences.]
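As a minimal illustration with made-up numbers (payoffs of ±10 and an investigation cost of 1 are not from the discussion above): suppose acting now, under a precise credence of 0.5 that the action slows AI development, has expected value 0, while a cheap investigation would resolve the question before you have to act.

$$\mathrm{EV}(\text{act now}) = 0.5(+10) + 0.5(-10) = 0, \qquad \mathrm{EV}(\text{investigate, then act only if favorable}) = 0.5(+10) + 0.5(0) - 1 = +4.$$

The case for waiting here comes entirely from the information one expects to obtain, with a perfectly precise credence of 0.5 throughout.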
(In contrast, it seems you thought I was referring to AI vs some other putative great longtermist intervention. I agree that plausible longtermist rivals to AI and bio are thin on the ground.)
That was an example of the phenomenon where a supposedly vast space goes unsearched, and when one actually surveys it, the number of top-level considerations turns out to be manageable (at least compared to thousands); this is based on experience with other people asserting that there must be thousands of similarly plausible risks. I would likewise say that the DeepMind employee in your example doesn’t face thousands upon thousands of ballpark-similar distinct considerations to assess.