(Quick reply to a private doc on interaction effects vs direct effects for existential risks / GCRs. They're arguing for more focus on interaction effects overall; I'm arguing for mostly working on direct effects. Keeping this for my notes.)
In addition to direct effects from AI, bio, nuclear, climate...
...there are also mitigating / interaction effects, which could make these direct effects better or worse. How much these interaction effects matter varies from risk to risk.
For AI, the mainline direct risks that are possibly existential (whether or not they actually occur) are both really bad and roughly independent of anything that doesn't affect whether unaligned AGI gets developed, and it's possible to work on those direct risks directly (technical and governance work). E.g. misinformation and disinformation won't particularly matter if an unaligned AGI is developed at one of the big companies, except to the extent that mis/disinformation contributed to that AGI getting developed (which I think is driven more by other factors, though it's debatable), so I think efforts should be focused on solving the technical alignment problem. Mis/disinformation is more relevant to the governance problem, but working on it still seems worse to me than specifically trying to make AI governance go well, given the goal of reducing existential risk (rather than solving the other important problems associated with mis/disinformation). (I expect one will make more progress trying to directly reduce x-risk from AI than by working on related things.)
(Bio has strong interaction effects with AI in terms of risk, though, so that's going to be one of the best examples of an important interaction effect.)
Just to quickly go over my intuitions about the interactions (there's a toy sketch of the direct-vs-interaction decomposition after this list):
AI x bio <-- for AI risk, I think the direct effects are worse than anything the interaction with bio adds
AI x nuclear <-- these are pretty separate problems imo
AI x climate <-- could go either way; I expect AI could substantially improve climate outcomes, depending on how advanced our AI gets. AI doesn't contribute that much to climate change compared to other factors, I think
Bio x AI <-- bio is SO MUCH WORSE with AI, this is an important interaction effect
Bio x nuclear <-- these are pretty separate problems imo
Bio x climate <-- worse climate will make pandemics worse for sure
Nuclear x AI <-- separate problems
Nuclear x bio <-- separate problems
Nuclear x climate <-- if climate change influences war, then climate makes nuclear risk worse
Climate x AI <-- could go either way, but probably best to work directly on climate if you don't think we'll get advanced AI systems soon
Climate x nuclear <-- nuclear war would certainly mess up the climate a LOT, but at that point we're really thinking about nuclear risk
Climate x bio <-- pandemics don't influence climate that much, I think
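To make the direct-vs-interaction framing a bit more concrete, here's a minimal toy sketch in Python. The decomposition (a direct term scaled by pairwise interaction multipliers) and every number in it are illustrative assumptions of mine, not anything from the doc I'm replying to; the point is just that if AI's direct term is large and its multipliers are near 1, direct work on AI looks best, while bio is the case where an interaction multiplier (from AI) does a lot of the work.

```python
# Toy sketch, not a real model: decompose each risk into a direct term plus
# pairwise interaction multipliers. All numbers are made up for illustration.

RISKS = ["ai", "bio", "nuclear", "climate"]

# Hypothetical "direct" existential risk contributions (arbitrary units).
direct = {"ai": 0.10, "bio": 0.02, "nuclear": 0.01, "climate": 0.005}

# interaction[a][b]: multiplier on risk `a` from risk/technology `b`.
# 1.0 = no interaction; >1.0 = b makes a worse. Illustrative guesses only.
interaction = {
    "ai":      {"bio": 1.0, "nuclear": 1.0, "climate": 1.0},
    "bio":     {"ai": 3.0, "nuclear": 1.0, "climate": 1.2},  # AI makes bio much worse
    "nuclear": {"ai": 1.0, "bio": 1.0, "climate": 1.1},      # climate -> war -> nuclear
    "climate": {"ai": 0.9, "bio": 1.0, "nuclear": 1.5},      # AI might help; nuclear hurts
}

def total_risk(name: str) -> float:
    """Direct term scaled by the product of its pairwise interaction multipliers."""
    r = direct[name]
    for mult in interaction[name].values():
        r *= mult
    return r

for name in RISKS:
    print(f"{name:8s} direct={direct[name]:.3f}  with interactions={total_risk(name):.3f}")
```

Real risks obviously don't factor this cleanly (interactions needn't be pairwise or multiplicative), but it's a compact way to see which cells in the list above are doing the work: on these made-up numbers, AI stays dominated by its direct term, and bio is the risk that interactions move the most.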
----
Feedback from another EA (thank you!)
> I think there are more interaction effects than your shortform is implying, but also most lines of inquiry aren’t very productive. [Agree in the general direction, but object-level disagree]
I think this is true, and if presented with the arguments I'd probably agree with them and come away with a fairer / more comprehensive picture.