Re bonus section: Note that we are (hopefully) taking expectations over our estimates for importance, neglectedness and tractability, such that general correlations between the factors across causes do not necessarily cause a problem. However, it seems quite plausible that our estimation errors are often correlated because of things like the halo effect.
Edit: I do not fully endorse this comment any more, but I still believe that the way we model the estimation procedure matters here. I will edit again once I am less confused.
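A minimal sketch (not part of the original comment) of why this can matter: if the three factors are modelled as log-normal and their log-scale estimation errors are correlated, the expected product E[I·T·N] can be several times larger than the product of the individual expectations E[I]·E[T]·E[N], which is what multiplying point estimates effectively gives you. The distributions and the correlation value below are assumed purely for illustration.

```python
# Illustrative Monte Carlo sketch: correlated estimation errors make
# E[I*T*N] diverge from E[I]*E[T]*E[N], so multiplying point estimates
# can misrank causes. All distributional assumptions are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

def product_expectations(correlation):
    # Log-normal factors: exponentials of jointly normal log-estimates,
    # where `correlation` is the pairwise correlation of log-scale errors.
    cov = np.full((3, 3), correlation)
    np.fill_diagonal(cov, 1.0)
    logs = rng.multivariate_normal(mean=np.zeros(3), cov=cov, size=n)
    i, t, neg = np.exp(logs).T
    return (i * t * neg).mean(), i.mean() * t.mean() * neg.mean()

for rho in (0.0, 0.5):
    e_prod, prod_of_means = product_expectations(rho)
    print(f"rho={rho}: E[I*T*N]={e_prod:.2f}, E[I]*E[T]*E[N]={prod_of_means:.2f}")
```

With zero correlation the two quantities agree (about 4.5 here), while a 0.5 correlation pushes E[I·T·N] to roughly 20, even though each marginal estimate is unchanged.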
Maybe one example could be that we don’t know the exact Scale of wild animal suffering, in part because we aren’t sure which animals are actually sentient. If it turns out that many more animals are sentient than expected, relative progress on the problem might be harder. It could actually be the opposite, though: if we think we could find more cost-effective methods to address wild invertebrate suffering than wild vertebrate suffering (invertebrates are generally believed to be less (likely to be) sentient than vertebrates, with a few exceptions), then the Scale and Solvability might be positively correlated.
Similarly, there could be a relationship between the Scale of a global catastrophic risk or x-risk and its Solvability. If advanced AI can cause value lock-in, how long the effects last might be related to how difficult it is to make relative progress on aligning AI, and more generally, how powerful AI will be is probably related to both the Scale and Solvability of the problem. How bad climate change or a nuclear war could be might also be related to Solvability, if worse risks are relatively more or less difficult to make progress on.