Hi there!
I think the OP uses the term “risk” to denote only potential outcomes in which an intervention ends up being neutral (such that the money used to fund it is functionally “wasted”). But in the domains of anthropogenic x-risks and meta-EA, many impactful interventions can easily end up being net-harmful: they can, for example, draw attention to info hazards, fund harmful outreach campaigns, enable dangerous experiments (e.g. in machine learning or virology), shorten AI timelines, or intensify competition dynamics among AI labs.
In the for-profit world, a limited liability company will generally not be worth less than nothing to its shareholders, even if it ends up causing a lot of harm. Relatedly, the “prospecting for gold” metaphor for EA-motivated hits-based giving is problematic: it is impossible to find a negative amount of gold, but it is possible to accidentally increase the chance of an existential catastrophe.