I think you’re not quite engaging with Johan’s argument for the necessity of worldview diversification if you assume it’s primarily about risk reduction or diminishing returns. My reading of their key point is that we don’t just have uncertainty about outcomes (risk); we also have uncertainty about the moral frameworks by which we evaluate those outcomes, combined with deep uncertainty about long-term consequences (complex cluelessness). Together, these produce fundamental uncertainty about whether we can calculate expected value at all (even if we hypothetically want to as EV-maximisers, itself a perilous strategy), and it’s these factors that make them think worldview diversification can be the right approach even at the individual level.
Mo, thank you for chiming in. Yes, you understood the key point, and you summarised it very well! In my reply to Jan, I expanded on your point about why I think calculating the expected value is not possible for AI safety. Feel free to check it out.
I am curious, though: do you disagree with the idea that a worldview diversification approach at an individual level is the preferred strategy? You’ve understood my point, but do you think it holds?