How to manage deep uncertainty over the long-run ramifications of one's decisions is a challenge across EA-land: particularly acute for longtermists, but present elsewhere too. Most would care about the risk that a charitable intervention could, in the medium term, prove counter-productive.
This makes some sense to me, although if that's all we're talking about I'd prefer to use plain English since the concept is fairly common. I think this is not all other people are talking about though; see my discussion with MichaelStJules.
FWIW, I don't think "risks" is quite the right word: sure, if we discover a risk which was so powerful and so tractable that it ends up overwhelming the good done by our original intervention, that obviously matters. But the really important thing there, for me at least, is the fact that we apparently have a new and very powerful lever for impacting the world. As a result, I would care just as much about a benefit which in the medium term would end up being worth >>1x the original target good (e.g. "GiveDirectly reduces extinction risk by reducing poverty, a known cause of conflict"); the surprisingly high magnitude of an incidental impact is what is really catching my attention, because it suggests there are much better ways to do good.
Although I don't see this with AMF, I do see this in and around animal advocacy. One crucial consideration around here is WAS (wild animal suffering), particularly an "inverse logic of the larder" (see), such as "per area, a factory farm has a lower intensity of animal suffering than the environment it replaced".
I think you were trying to draw a distinction, but FWIW this feels structurally similar to the "AMF impacts population growth/economic growth" argument to me, and I would give structurally the same response: once you truly believe a factory farm net reduced animal suffering via the wild environment it incidentally destroyed, there are presumably much more efficient ways to destroy wild environments. As a result, it appears we can benefit wild animals much more than farmed animals, and ending factory farming should disappear from your priority list, at least as an end goal (it may come back as an itself-incidental consequence of e.g. "promoting as much concern for animal welfare as possible"). Is your point just that it does not in fact disappear from people's priority lists in this case? That I'm not well-placed to observe or comment on either way.
b) Early (or motivated) stopping across crucial considerations.
This I agree is a problem. I'm not sure if thinking in terms of cluelessness makes it better or worse; I've had a few conversations now where my interlocutor tries to avoid the challenge of cluelessness by presenting an intervention that supposedly has no complex cluelessness attached. So far, I've been unconvinced by every case and think said interlocutor is "stopping early" and missing aspects of impact about which they are complexly clueless (often economic/population growth impacts, since it's actually quite hard to come up with an intervention that doesn't credibly impact one of those).
I guess I think part of encouraging people to continue thinking rather than stop involves getting people comfortable with the fact that there's a perfectly reasonable chance that what they end up doing backfires; everything has risk attached, and trying to entirely avoid such risk is both a fool's errand and a quick path to analysis paralysis. Currently, my impression is that cluelessness-as-used is pushing towards avoidance rather than acceptance, but the sample size is small and so this opinion is very weakly held. I would be more positive on people thinking about this if it seemed to help push them towards acceptance, though.
Given my fairly deflationary OP, I don't think these problems are best described as cluelessness
Point taken. Given that, I hope this wasn't too much of a hijack, or at least was an interesting hijack. I think I misunderstood how literally you intended the statements I quoted and disagreed with in my original comment.
FWIW, I don't think "risks" is quite the right word: sure, if we discover a risk which was so powerful and so tractable that it ends up overwhelming the good done by our original intervention, that obviously matters. But the really important thing there, for me at least, is the fact that we apparently have a new and very powerful lever for impacting the world. As a result, I would care just as much about a benefit which in the medium term would end up being worth >>1x the original target good (e.g. "GiveDirectly reduces extinction risk by reducing poverty, a known cause of conflict"); the surprisingly high magnitude of an incidental impact is what is really catching my attention, because it suggests there are much better ways to do good.
(Apologies in advance if I'm rehashing unhelpfully)
The usual cluelessness scenarios are less about a "confirmed discovery" and more about the possibility that there is a powerful lever for impacting the future, and that your intended intervention may be pulling it in the wrong direction. Say your expectation for the EV of GiveDirectly on conflict has a distribution with a mean of zero but an SD of 10x the magnitude of the benefits you had previously estimated. If it were (e.g.) +10, there's a natural response of "shouldn't we try something which targets this on purpose?"; if it were 0, we wouldn't attend to it further; if it were -10, you wouldn't give to (now net EV = -9) GiveDirectly.
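To put toy numbers on that, here is a minimal sketch; the normal distribution and all figures are illustrative assumptions of mine, not anything claimed in the discussion above:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

direct_benefit = 1.0  # the originally estimated good, in arbitrary units

# Incidental effect on conflict: mean zero, SD 10x the direct benefit
# (assuming, purely for illustration, that it is normally distributed).
conflict_effect = rng.normal(loc=0.0, scale=10.0, size=100_000)

net_outcomes = direct_benefit + conflict_effect
print(f"net EV ~ {net_outcomes.mean():+.2f}")  # ~ +1: the mean-zero term washes out
print(f"net SD ~ {net_outcomes.std():.1f}")    # ~ 10: the variance very much does not

# The three scenarios in the text correspond to draws of +10, 0, and -10:
# +10 -> "shouldn't we target this on purpose?"; 0 -> ignore it;
# -10 -> net EV = -9, so you wouldn't give.
```

The point the sketch makes concrete: on a straight EV calculation the incidental term contributes nothing, yet it dominates the spread of outcomes.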
The right response where all three scenarios are credible (plus all the intermediates) but you're unsure which one you're in isn't intuitively obvious (at least to me). Even if (like me) you're sympathetic to pretty doctrinaire standard EV accounts (i.e. you quantify this uncertainty plus all the others, just "run the numbers", and take the best EV), this approach seems to ignore the wide variance, which seems worthy of further attention.
The OP tries to reconcile this with the standard approach by saying this indeed often should be attended to, but under the guise of value of information rather than something "extra" to orthodoxy. Even though we should still go with our best guess if we had to decide now (so expectation-neutral but high-variance terms "cancel out"), we might have the option to postpone our decision and improve our guesswork. Whether to take that option should be governed by how resilient our uncertainty is. If your central estimate of GiveDirectly's effect on conflict would move on average by 2 units if you spent an hour thinking about it, that seems an hour well spent; if you thought you could spend a decade on it and remain where you are, going with the current best guess looks better.
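One crude way to operationalise that resilience test (a hedged sketch with made-up numbers; the function and its parameters are hypothetical, and a real value-of-information calculation would score the expected improvement in the decision itself rather than raw estimate movement):

```python
def research_beats_deciding_now(expected_shift_per_hour: float,
                                hours: float,
                                opportunity_cost_per_hour: float) -> bool:
    """Rough resilience test: deliberate further only if the estimate is
    expected to move by more than the deliberation costs.

    All quantities are in the same units of "good done". Linear scaling
    ignores diminishing returns to research, purely to keep the toy short.
    """
    expected_movement = expected_shift_per_hour * hours
    deliberation_cost = opportunity_cost_per_hour * hours
    return expected_movement > deliberation_cost

# An hour that shifts the central estimate by ~2 units is well spent...
print(research_beats_deciding_now(2.0, hours=1, opportunity_cost_per_hour=0.5))       # True
# ...a decade that leaves the estimate where it started is not.
print(research_beats_deciding_now(0.0, hours=87_600, opportunity_cost_per_hour=0.5))  # False
```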
This can be put in plain(er) English (although familiar-to-EA jargon like "EV" may remain). Yet there are reasons to be hesitant about the orthodox approach (even though I think the case in favour is ultimately stronger): besides the usual bullets, we would be kidding ourselves if we thought we ever really had in our heads an uncertainty distribution specified to arbitrary precision, and maybe our uncertainty isn't even remotely approximated by the objects we manipulate in standard models of it. Or (a point owed to Andreas): even if it is, then, similar to how rule-consequentialism may be better than act-consequentialism, some other epistemic policy would get better results than applying the orthodox approach in these cases of deep uncertainty.
Insofar as folks are more sympathetic to this, they would not want to be deflationary, and might urge investment in new techniques/vocab to grapple with the problem. They may also think we don't have a good "answer" yet for what to do in these situations, so may hesitate to give the "accept there's uncertainty but don't be paralysed by it" advice that you and I would. Maybe these issues are an open problem we should try to figure out better before pressing on.