I’m presenting a talk at EAG NYC next month on the topic of indirect effects. Not to spoil the talk too much, but the broad theme is that wild animal welfare and longtermism are in similar epistemic positions with regard to (different types of) indirect effects, and that it could be instructive to compare how the two different communities approach uncertainty about these effects.
By “indirect effects” I mean: the morally relevant effects that your act has on the world beyond its intended consequences. You might have also seen these called cascade effects, network effects, or Nth-order effects. For example, in wild animal welfare we might contrast the intended effect of food provisioning (improved welfare for the animals fed, who are now less hungry) with its indirect effects (anything from crowding at feeding sites, leading to aggressive interactions and increased disease transmission, to the complex and hard-to-predict effects of a potentially growing population).
To try to construct a longtermist example, noting that this is not my area of expertise, one might compare the direct effects of passing AI safety regulation (e.g., slowing the development of novel technologies and decreasing the likelihood that a dictatorship uses AI to lock in its regime for centuries) with some potential indirect effects (e.g., lengthening the timeline to AI solving some major human problem, like developing a new treatment for a disease).
Since my experience is almost entirely in the wild animal welfare context, I would like to crowdsource some examples illustrating the different ways folks working on AI or GCRs think about indirect effects, or how theorists of longtermism have suggested uncertainty about these effects be treated. Examples of resources I’d be interested in include posts/websites/papers addressing questions like:
- How do indirect effects get incorporated into cost-effectiveness calculations (if anyone is doing such calculations)?
- When can indirect effects be treated as irrelevant on cluelessness grounds, and when can they not?
- Which organizations’ theories of change (if any) have explicitly tried to account for indirect effects, or selected approaches they think minimize unintended consequences?
I’m not totally unaware of the space; I have discussed this topic with friends who work on AI and GCRs—I just want to ensure I’m not missing any really interesting work on the topic from outside my network.
Note: This question is focused on sourcing examples of how these ideas are handled in the longtermist community; I won’t engage with comments on the similarity or otherwise of the two categories of indirect effects for now :)
[Question] Indirect effects in longtermism