So for example, if the AI risk worldview gets more and more alarmed every year, it might “borrow” more and more money from the factory farming worldview, with the promise to pay it back if and when it becomes less alarmed. But the whole point of doing the bucketing in the first place is so that the factory farming worldview can protect itself from the possibility that the AI risk worldview is totally wrong/unhinged, and so you can’t assume that the AI risk worldview is just as likely to update downward as upward.
Suppose that as the AI risk worldview becomes more alarmed, you are paying more and more units of x-risk prevention (according to the AI risk worldview) for every additional farmed animal QALY (as estimated by the farmed animal worldview). I find that very unappealing.
I agree that this, and your other comment below, both describe unappealing features of the current setup. I’m just pointing out that in fact there are unappealing outcomes all over the place, and that just because the equilibrium we’ve landed on has some unappealing properties doesn’t mean that it’s the wrong equilibrium. Specifically, the more you move towards pure maximization, the more you run into these problems; and as Holden points out, I don’t think you can get out of them just by saying “let’s maximize correctly”.
(You might say: why not a middle ground between “fixed buckets” and “pure utility maximization”? But note that having a few buckets chosen based on standard cause prioritization reasoning is already a middle ground between pure utility maximization and the mainstream approach to charity, which does way less cause prioritization.)
I think the post from Holden that you point to isn’t really enough to go from “we think really hardcore estimation is perilous” to “we should do worldview diversification”. Worldview diversification is fairly specific, and there are other ways you could rein in optimization even if you don’t have worldviews, e.g., adhering to deontological constraints, reducing “intenseness”, giving good salaries to employees, and so on.