FWIW, I don't think "risks" is quite the right word: sure, if we discover a risk so powerful and so tractable that it ends up overwhelming the good done by our original intervention, that obviously matters. But the really important thing there, for me at least, is the fact that we apparently have a new and very powerful lever for impacting the world. As a result, I would care just as much about a benefit which in the medium term would end up being worth >>1x the original target good (e.g. "GiveDirectly reduces extinction risk by reducing poverty, a known cause of conflict"); the surprisingly high magnitude of an incidental impact is what really catches my attention, because it suggests there are much better ways to do good.
(Apologies in advance if I'm rehashing unhelpfully.)
The usual cluelessness scenarios are more about the possibility that there is a powerful lever for impacting the future, and that your intended intervention may be pulling it in the wrong direction (rather than about a "confirmed discovery"). Say your expectation for the EV of GiveDirectly's effect on conflict has a distribution with a mean of zero but an SD of 10x the magnitude of the benefits you had previously estimated. If it were (e.g.) +10, there's a natural response of "shouldn't we try something which targets this on purpose?"; if it were 0, we wouldn't attend to it further; and if it were −10, you wouldn't give to (now net EV = "−9") GiveDirectly.
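To make the arithmetic in those three scenarios explicit, here is a toy sketch. All the numbers are the made-up ones from the example above, in units of "multiples of the originally estimated direct benefit" (so the direct benefit is 1):

```python
# Toy illustration of the three cluelessness scenarios above.
# Units: multiples of the originally estimated direct benefit of the
# intervention, so direct_benefit = 1 by construction.
direct_benefit = 1.0

# Three credible values for the incidental effect on conflict,
# drawn from a distribution with mean 0 and SD 10.
scenarios = {"+10": 10.0, "0": 0.0, "-10": -10.0}

for label, side_effect in scenarios.items():
    net_ev = direct_benefit + side_effect
    print(f"incidental effect {label}: net EV = {net_ev:+.0f}")
```

The −10 case is where the "net EV = −9" figure comes from: the original benefit of +1 plus an incidental effect of −10.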
The right response when all three scenarios are credible (plus all the intermediates) but you're unsure which one you're in isn't intuitively obvious (at least to me). Even if (like me) you're sympathetic to fairly doctrinaire standard EV accounts (i.e. you quantify this uncertainty plus all the others, just "run the numbers", and take the best EV), this approach seems to ignore the wide variance, which seems worthy of further attention.
The OP tries to reconcile this with the standard approach by saying that this should indeed often be attended to, but under the guise of value of information rather than as something "extra" to orthodoxy. Even though we should still go with our best guess if we had to decide now (so expectation-neutral but high-variance terms "cancel out"), we may have the option to postpone our decision and improve our guesswork. Whether to take that option should be governed by how resilient our uncertainty is. If your central estimate of GiveDirectly's effect on conflict would move on average by 2 units if you spent an hour thinking about it, that seems an hour well spent; if you thought you could spend a decade on it and remain where you are, going with the current best guess looks better.
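The resilience heuristic above can be caricatured as a one-line rule (the function name and thresholds here are my own illustration, not anything from the OP):

```python
def research_worthwhile(expected_shift_per_hour, hours, decision_relevant_shift):
    """Crude rule of thumb: further research is worth it if, over the time
    you'd spend, your central estimate is expected to move by enough to
    plausibly change the decision (all quantities in benefit units)."""
    return expected_shift_per_hour * hours >= decision_relevant_shift

# An hour expected to move the estimate by 2 units (non-resilient uncertainty):
print(research_worthwhile(2.0, 1, 1.0))       # True: think for an hour first
# A decade (~20,000 working hours) expected to move it by ~nothing
# (resilient uncertainty):
print(research_worthwhile(0.0, 20_000, 1.0))  # False: go with the best guess
```

Obviously real value-of-information calculations are not linear like this; the point is only that the decision to keep deliberating turns on how much you expect your estimate to move, not on how uncertain you feel.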
This can be put in plain(er) English (although familiar-to-EA jargon like "EV" may remain). Yet there are reasons to be hesitant about the orthodox approach (even though I think the case in favour is ultimately stronger): besides the usual bullets, we would be kidding ourselves if we thought we ever really had in our heads an uncertainty distribution to arbitrary precision, and maybe our uncertainty isn't even remotely approximated by the objects we manipulate in standard models of it. Or (a point owed to Andreas): even if it is, then, just as rule-consequentialism may get better results than act-consequentialism, some other epistemic policy may get better results than applying the orthodox approach in these cases of deep uncertainty.
Insofar as folks are more sympathetic to this, they would not want to be deflationary, and might urge investment in new techniques/vocab to grapple with the problem. They may also think we don't yet have a good "answer" for what to do in these situations, and so may hesitate to give the "accept there's uncertainty but don't be paralysed by it" advice that you and I would. Maybe these issues are an open problem we should try to figure out better before pressing on.