If, for instance, one had credences such that the expected number of future people was only 10^14, the status quo probability of catastrophe from AI was only 0.001%, and the proportion by which $1 billion of careful spending would reduce this risk was also only 0.001%, then one would judge spending on AI safety equivalent to saving only 0.001 lives per $100 – less than the near-future benefits of bednets. But this constellation of conditions seems unreasonable.
(...)
For example, we don’t think any reasonable representor even contains a probability function according to which efforts to mitigate AI risk save only 0.001 lives per $100 in expectation.
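The 0.001-lives-per-$100 figure in the quote follows directly from multiplying the three stated numbers; here is a quick back-of-the-envelope check (variable names are mine, values are taken from the quote):

```python
# Reproduce the quoted arithmetic: 10^14 expected future people,
# 0.001% status quo probability of AI catastrophe, and a 0.001%
# proportional risk reduction from $1 billion of careful spending.
expected_future_people = 1e14
p_catastrophe = 0.001 / 100     # 0.001% expressed as a probability
risk_reduction = 0.001 / 100    # 0.001% proportional reduction
spending = 1e9                  # $1 billion

expected_lives_saved = expected_future_people * p_catastrophe * risk_reduction
lives_per_100_dollars = expected_lives_saved / spending * 100
print(lives_per_100_dollars)    # ≈ 0.001, matching the quote
```

So under this constellation of credences, $1 billion buys about 10,000 expected lives, i.e. roughly 0.001 per $100, which is the comparison to bednets the authors make.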
This point isn’t central to their argument, so they don’t elaborate much, but they are assuming here that “careful” spending will not do more harm than good in expectation, and that assumption seems arbitrary and unreasonable to me. See some discussion here.