How to manage deep uncertainty over the long-run ramifications of one's decisions is a challenge across EA-land. It is particularly acute for longtermists, but arises elsewhere too: most would care about the risk that, in the medium term, a charitable intervention proves counter-productive.
This makes some sense to me, although if that’s all we’re talking about I’d prefer to use plain English since the concept is fairly common. I think this is not all other people are talking about though; see my discussion with MichaelStJules.
FWIW, I don’t think ‘risks’ is quite the right word: sure, if we discover a risk which was so powerful and so tractable that we end up overwhelming the good done by our original intervention, that obviously matters. But the really important thing there, for me at least, is the fact that we apparently have a new and very powerful lever for impacting the world. As a result, I would care just as much about a benefit which in the medium term would end up being worth >>1x the original target good (e.g. “Give Directly reduces extinction risk by reducing poverty, a known cause of conflict”); the surprisingly-high magnitude of an incidental impact is what is really catching my attention, because it suggests there are much better ways to do good.
Although I don’t see this with AMF, I do see this in and around animal advocacy. One crucial consideration in this area is wild animal suffering (WAS), particularly an ‘inverse logic of the larder’ (see), such as “per area, a factory farm has a lower intensity of animal suffering than the environment it replaced”.
I think you were trying to draw a distinction, but FWIW this feels structurally similar to the ‘AMF impacts population growth/economic growth’ argument to me, and I would give structurally the same response: once you truly believe a factory farm net reduced animal suffering via the wild environment it incidentally destroyed, there are presumably much more efficient ways to destroy wild environments. As a result, it appears we can benefit wild animals much more than farmed animals, and ending factory farming should disappear from your priority list, at least as an end goal (it may come back as an itself-incidental consequence of e.g. ‘promoting as much concern for animal welfare as possible’). Is your point just that it does not in fact disappear from people’s priority lists in this case? That I’m not well-placed to observe or comment on either way.
b) Early (or motivated) stopping across crucial considerations.
This I agree is a problem. I’m not sure if thinking in terms of cluelessness makes it better or worse; I’ve had a few conversations now where my interlocutor tries to avoid the challenge of cluelessness by presenting an intervention that supposedly has no complex cluelessness attached. So far, I’ve been unconvinced of every case and think said interlocutor is ‘stopping early’ and missing aspects of impact about which they are complexly clueless (often economic/population growth impacts, since it’s actually quite hard to come up with an intervention that doesn’t credibly impact one of those).
I guess I think part of encouraging people to continue thinking rather than stop involves getting people comfortable with the fact that there’s a perfectly reasonable chance that what they end up doing backfires, everything has risk attached, and trying to entirely avoid such is both a fool’s errand and a quick path to analysis paralysis. Currently, my impression is that cluelessness-as-used is pushing towards avoidance rather than acceptance, but the sample size is small and so this opinion is very weakly held. I would be more positive on people thinking about this if it seemed to help push them towards acceptance though.
Given my fairly deflationary OP, I don’t think these problems are best described as cluelessness
Point taken. Given that, I hope this wasn’t too much of a hijack, or at least was an interesting hijack. I think I misunderstood how literally you intended the statements I quoted and disagreed with in my original comment.
(Apologies in advance if I’m rehashing unhelpfully.)
The usual cluelessness scenarios are more about the possibility that there is a powerful lever for impacting the future, and that your intended intervention may be pulling it in the wrong direction (rather than a ‘confirmed discovery’). Say your expectation for the EV of GiveDirectly on conflict has a distribution with a mean of zero but an SD of 10x the magnitude of the benefits you had previously estimated. If it were (e.g.) +10, there’s a natural response of ‘shouldn’t we try something which targets this on purpose?’; if it were 0, we wouldn’t attend to it further; if it were −10, you wouldn’t give to (now net EV = −9) GiveDirectly.
The right response where all three scenarios are credible (plus all the intermediates) but you’re unsure which one you’re in isn’t intuitively obvious (at least to me). Even if (like me) you’re sympathetic to pretty doctrinaire standard EV accounts (i.e. you quantify this uncertainty plus all the others, just ‘run the numbers’, and take the best EV), this approach seems to ignore the wide variance, which seems worthy of further attention.
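A toy simulation of the situation being described (all numbers are illustrative placeholders, not anyone's actual estimates): a zero-mean, high-variance incidental term leaves the expected value roughly where it was, while dominating the spread of outcomes, which is exactly what a pure best-guess EV comparison fails to register.

```python
import random

random.seed(0)

DIRECT_BENEFIT = 1.0   # the good previously estimated, in arbitrary units
CONFLICT_SD = 10.0     # zero-mean conflict channel, SD 10x the benefit

# Sample many possible worlds: total impact = direct benefit + conflict effect.
samples = [DIRECT_BENEFIT + random.gauss(0.0, CONFLICT_SD) for _ in range(100_000)]

mean_impact = sum(samples) / len(samples)
share_backfire = sum(s < 0 for s in samples) / len(samples)

# The mean stays close to the original +1, yet in nearly half of the
# sampled worlds the intervention is net negative.
print(f"mean total impact: {mean_impact:.2f}")
print(f"share of worlds that backfire: {share_backfire:.2f}")
```

The point of the sketch is only that the summary statistic a doctrinaire EV account acts on (the mean) is almost untouched by the conflict term, while the distribution of outcomes is utterly dominated by it.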
The OP tries to reconcile this with the standard approach by saying this indeed often should be attended to, but under the guise of value of information rather than something ‘extra’ to orthodoxy. Even though we should still go with our best guess if we had to decide now (so expectation-neutral but high-variance terms ‘cancel out’), we might have the option to postpone our decision and improve our guesswork. Whether to take that option should be governed by how resilient our uncertainty is. If your central estimate of GiveDirectly’s effect on conflict would move on average by 2 units if you spent an hour thinking about it, that seems an hour well spent; if you thought you could spend a decade on it and remain where you are, going with the current best guess looks better.
This can be put in plain(er) English (although familiar-to-EA jargon like ‘EV’ may remain). Yet there are reasons to be hesitant about the orthodox approach (even though I think the case in favour is ultimately stronger): besides the usual bullets, we would be kidding ourselves if we thought we ever really had in our heads an uncertainty distribution to arbitrary precision, and maybe our uncertainty isn’t even remotely approximated by the objects we manipulate in standard models of it. Or (owed to Andreas) even if it is, similar to how rule-consequentialism may be better than act-consequentialism, some other epistemic policy might get better results than applying the orthodox approach in these cases of deep uncertainty.
Insofar as folks are more sympathetic to this, they would not want to be deflationary and perhaps urge investment in new techniques/vocab to grapple with the problem. They may also think we don’t have a good ‘answer’ yet of what to do in these situations, so may hesitate to give ‘accept there’s uncertainty but don’t be paralysed by it’ advice that you and I would. Maybe these issues are an open problem we should try and figure out better before pressing on.