Cluelessness can be another reason something is intractable. For example, the effects of interventions on wild animals are extremely complicated, especially when population sizes change, and we know little about these animals’ welfare and have considerable uncertainty about their moral weights. As a result, it’s hard to know whether a given intervention is net positive or net negative in expectation. However, relatively little has been spent on understanding wild animals’ welfare or their moral weights, so this deep uncertainty may not be unresolvable.
Luke Muehlhauser has also said that almost all of the AI (governance) interventions he looks into seem about as likely to be net negative as net positive: https://forum.effectivealtruism.org/posts/pxALB46SEkwNbfiNS/the-motivated-reasoning-critique-of-effective-altruism?commentId=6yFEBSgDiAfGHHKTD