Thanks, I really haven’t given sufficient thought to the cluelessness section, which seems the most novel and tricky. Fanaticism is probably just as important, if not more so, but is also easier to get one’s head around.
I agree with your other comment, though, that the following seems to imply that the authors are not “complexly clueless” about AI safety:
For example, we don’t think any reasonable representor even contains a probability function according to which efforts to mitigate AI risk save only 0.001 lives per $100 in expectation.
I suppose that if you say it’s unreasonable for your representor to contain a probability function associated with a very small positive expected value, you’d also say it can’t contain one associated with a negative expected value. That does seem to me a slightly extreme view.
Ya, maybe your representor should be a convex set, so that for any two functions in it, any probabilistic mixture of them is also in your representor. This way, if you have one function with expected value x and another with expected value y, you should also have functions with every possible expected value in between. So, if you have positive and negative EVs in your representor, you would also have 0 EV.
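To make this concrete, here is a minimal sketch (with made-up numbers) of why convexity gives you every intermediate EV. Expectation is linear, so the EV under a mixture λ·P_x + (1−λ)·P_y is just the same mixture of the two EVs, and solving for λ finds the mixture with EV exactly 0:

```python
def mixture_ev(ev_x, ev_y, lam):
    """EV under the probabilistic mixture lam*P_x + (1-lam)*P_y.

    Expectation is linear in the probability function, so the mixture's
    EV is the same convex combination of the two EVs.
    """
    return lam * ev_x + (1 - lam) * ev_y

# Hypothetical functions in the representor: one positive-EV, one negative-EV.
ev_x = 10.0
ev_y = -4.0

# Solve lam*ev_x + (1-lam)*ev_y = 0 for lam; this mixture has EV exactly 0,
# so a convex representor containing both functions also contains a 0-EV one.
lam_zero = (0 - ev_y) / (ev_x - ev_y)
print(mixture_ev(ev_x, ev_y, lam_zero))
```

Any λ in [0, 1] sweeps out the whole interval [−4, 10] of EVs, which is the point about convexity above.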
Do you mean negative EV is slightly extreme or ruling out negative EV is slightly extreme?
I think neglecting to look into and address ways something could be negative (e.g. a probability difference, or an EV) often leads us to unjustifiably assume a positive lower bound, and I think this is an easy mistake to make or miss. Combining a positive lower bound with astronomical stakes would make the argument appear very compelling.
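A small sketch of why the lower bound does so much work (all numbers hypothetical): if every function in your representor assigns the intervention an EV above some benchmark, the intervention dominates under every function, but admitting even one negative-EV function breaks that dominance.

```python
def dominates(representor_evs, benchmark):
    """True if the option beats the benchmark under every probability
    function in the representor (represented here just by its EVs)."""
    return all(ev > benchmark for ev in representor_evs)

# Hypothetical benchmark: lives saved per $ by a "safe" alternative.
benchmark_ev = 0.0025

# EVs with a positive lower bound: dominance holds, however small the bound's
# margin, and astronomical stakes then make the case look very compelling.
positive_lower_bound = [0.01, 0.5, 2.0]

# Same representor plus one negative-EV function: dominance disappears.
with_negative = [-0.5, 0.01, 2.0]

print(dominates(positive_lower_bound, benchmark_ev))
print(dominates(with_negative, benchmark_ev))
```

So whether the representor is allowed to contain negative-EV functions is exactly what the compellingness of the argument turns on.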
Yeah I meant ruling out negative EV in a representor may be slightly extreme, but I’m not really sure—I need to read more.