I have some sympathy with 'a simple utilitarian CBA doesn't suffice' in general, but I don't arrive at your conclusion, and your intuition pump doesn't lead me there either.
It doesn't seem to require any staunch utilitarianism to conclude, 'if a quick look at the gun design suggests a 51% chance it shoots you in the face, and only a 49% chance it hits the tiger you must hunt or otherwise starve to death'*, that you should drop the project of its development. Or at least halt, until a more detailed examination lets you update to a more precise understanding.
You mention that with AI we have 'abstract arguments', to which my gun's simple failure probability may not do full justice. But I think not much changes even if your skepticism about the gun were as abstract or intangible as: 'Err, somehow it just doesn't seem quite right; I can't quite pin down why, but overall the design doesn't earn my trust. Maybe it explodes in my hand, burns me, its smoke makes me ill, whatever; I just don't trust it. I really don't know, but HAVING TAKEN IN ALL EVIDENCE AND LIVED EXPERIENCE, incl. the smartest EA and LW posts and all, I guess 51% I get the harm and only 49% the equivalent benefit, one way or another' - as long as that is still truly the best estimate you can produce at the moment.
The (potential) fact that new technologies have more typically advanced us does very little work in changing that conclusion, though of course, in a case as complicated as AI, this observation itself may have informed some of our cost-benefit reflections.
*Yes, you guessed correctly: I'd better implicitly assume something like a 50% chance of survival without catching the tiger and 100% with it (and that you only care about your own survival) to really arrive at the intended 'slightly negative in the cost-benefit comparison'. So take the thought experiment as an unnecessarily complicated, quick-and-dirty one, but I think it still makes the simple point.
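For what it's worth, the footnote's arithmetic can be spelled out in a few lines. All numbers here are the thought experiment's assumptions (backfire kills you, hitting the tiger guarantees survival), not estimates of anything real:

```python
# Assumed numbers from the thought experiment, nothing more:
p_backfire = 0.51    # gun shoots your own face (assume this kills you)
p_hit_tiger = 0.49   # gun hits the tiger (assume this guarantees survival)

survival_without_gun = 0.50  # baseline: 50% survival if you don't build the gun

# Expected survival if you build and use the gun:
survival_with_gun = p_backfire * 0.0 + p_hit_tiger * 1.0

# 0.49 < 0.50: building the gun is slightly negative, as intended.
print(survival_with_gun, survival_without_gun)
```

So even a tiny expected loss (0.49 vs. 0.50) already suffices for 'drop or halt the project', without any heavy-duty utilitarian machinery.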
In my thought experiment, we generally have a moral and legal presumption against censorship, which I argued should weigh heavily in our decision-making. By contrast, in your thought experiment with the tiger, I see no salient reason why we should have a presumption to shoot the tiger now rather than wait until we have more information. For that reason, I don't think your comment is responding to my argument about how we should weigh heuristics against simple cost-benefit analyses.
In the case of an AI pause, the current law is not consistent with a non-voluntary pause. Moreover, from an elementary moral perspective, inventing a new rule and forcing everyone to follow it generally requires some justification. There is no symmetry here between action vs. inaction as there would be in the case of deciding whether to shoot the tiger right now. If you don’t see why, consider whether you would have had a presumption against pausing just about any other technology, such as bicycles, until they were proven safe.
My point is not that AI is just as safe as bicycles, or that we should disregard cost-benefit analyses. Instead, I am trying to point out that cost-benefit analyses can often be flawed, and relying on heuristics is frequently highly rational even when they disagree with naive cost-benefit analyses.
I tried to account for the difficulty of pinning down all relevant effects in our CBA by adding the somewhat intangible feeling that the gun might backfire (standing in for your point that there may be more general/typical but harder-to-quantify benefits of not censoring, etc.). Sorry if that was not clear.
More importantly:
I think your last paragraph gets to the essence: you're afraid the cost-benefit analysis is done naively, potentially ignoring the good reasons why we most often may not want to prevent the advancement of science/tech.
This does not, however, imply that pausing would require Pause Benefit >> Pause Cost. It simply means you're wary that certain values of E[Pause Benefit] (or of E[Pause Cost]) may be biased in a particular direction, so that you don't trust conclusions based on them. Of course, if we expect a particular bias in our benefit or cost estimate, we cannot just use the wrong estimates.
When I advocate being even-handed, I mean a cost-benefit comparison that is non-naive. That is, if we have priors that positive effects exist which we've just not yet managed to pin down or quantify, we have (i) used reasonable placeholders for them, avoiding bias as well as we can, and (ii) duly widened our uncertainty intervals. It is for this reason that, in the end, we can remain even-handed, i.e. pause roughly iff E[Pause Benefit] > E[Pause Cost]. Or, if you like, iff E[Pause Benefit*] > E[Pause Cost*], with * = accounting, with all due care, for the fact that you'd usually not want to stop your professor, or tech advancements in general, for the familiar reasons.
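To make the 'non-naive but still even-handed' rule concrete, here is a minimal sketch. Every number is a made-up placeholder purely for illustration, not an actual estimate of AI-pause costs or benefits: an extra term stands in for the hard-to-quantify benefits of not pausing, and the wide standard deviations stand in for the duly widened uncertainty intervals.

```python
import random

random.seed(0)

def expected(samples):
    """Monte Carlo estimate of an expectation."""
    return sum(samples) / len(samples)

n = 100_000
# Hypothetical, widened distributions over pause benefit and pause cost.
pause_benefit = [random.gauss(1.00, 0.8) for _ in range(n)]
# Cost side includes a small placeholder for unquantified benefits of
# not pausing (the 'usually don't stop tech advancement' prior).
pause_cost = [random.gauss(0.90, 0.8) + 0.05 for _ in range(n)]

# Even-handed decision rule: pause iff E[Pause Benefit] > E[Pause Cost].
pause = expected(pause_benefit) > expected(pause_cost)
print(pause)
```

The point of the sketch is only that the heuristics enter through the inputs (the placeholder term and the widened intervals), while the decision rule itself stays a symmetric expected-value comparison rather than demanding Benefit >> Cost.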