I feel there’s a bit of tension between your stating that “I don’t think we should sidestep the philosophical aspect of this debate” and your later conclusion that “Worldview diversification is a useful and practical way for the EA community to make decisions.”
Great points!

I say the former as a justification for avoiding an assumption (diminishing returns to money across causes) that would automatically support a balanced allocation of money without any other normative judgments. But I personally place a high premium on decisions being “robustly” good, so I do see worldview diversification as a useful and practical way to make decisions (for someone who places a premium on robustness).
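To see why that assumption does so much work, here’s a toy sketch (my own, with made-up effectiveness numbers): with logarithmic within-cause returns, the impact-maximizing allocation is automatically interior, so every cause gets funded without any appeal to robustness.

```python
import numpy as np

budget = 100.0
# Hypothetical marginal-effectiveness weights: cause A is 10x cause B
weights = np.array([10.0, 1.0])

# Maximizing sum_i w_i * ln(x_i) subject to sum_i x_i = budget has the
# closed-form optimum x_i = budget * w_i / sum(w): an interior allocation
optimal = budget * weights / weights.sum()
print(optimal)  # [90.9..., 9.09...]: even the 10x-less-effective cause gets funded
```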
In economics we’re used to treating basically any functional form for utility as permissible, so this is somewhat strange, but here we’re thinking about normative ethics rather than consumption choices.
I appreciate the push, since I didn’t really mount a defense of risk aversion in the post. I don’t really have a great interest in doing so. For one thing, I am axiomatically risk-averse and I don’t put that belief up for debate. Risk aversion leads to the unpalatable conclusion that marginal lives are less worth saving, as you point out. But risk neutrality leads to the St Petersburg paradox. Both are slightly contrived scenarios, but not so contrived that I can easily dismiss them as irrelevant edge cases. I don’t have solutions in mind (the papers you linked look interesting, but I find them hard to parse). So I don’t feel passionately about arguing the case for risk-averse decision-making, but I still believe in it.
In reality I don’t think anyone who practices worldview diversification (allocating resources across causes in a way that’s inconsistent with any single worldview) actually places a really high premium on tight philosophical defenses of it. (See the quote at the start of the post!) I wrote this more for my own fun.
Thanks for the thoughtful reply!

I understand you don’t want to debate risk attitudes, but I hope it’s alright that I try to expand on my thought just a bit to make sure I get it across well—no need to respond.
To be clear: I think risk aversion is entirely fine. My utility in apples is concave, of course. That’s not really up for ‘debate’. Likewise for other consumption preferences.
But ethics seems different. Philosophers debate what’s permissible, mandatory, etc. in the context of ethics (not so much in the context of consumption). The EA enterprise is partly a result of this.
And choosing between uncertain altruistic interventions is of course in part a problem of ethics. Risk preferences w.r.t. wellbeing in the world make moral recommendations independently of empirical facts. This is why I see them as more up for debate. (Here’s a great overview of such debates.)
We often argue about the merits of ethical views under certainty: should our social welfare function concavify individual utilities before adding them up (prioritarianism) or not (utilitarianism)? Similarly, under uncertainty, we may ask: should our social welfare function concavify the sum of individual utilities (moral risk aversion) or not (moral risk neutrality)?
These are the sorts of questions I meant were relevant; I agree risk aversion per se is completely unproblematic.
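If it helps, here’s a toy numerical sketch of the distinction (my own illustration, with made-up utility numbers; the square root stands in for any increasing concave transform):

```python
import numpy as np

def concavify(u):
    # Any increasing, strictly concave transform; square root for concreteness
    return np.sqrt(u)

# Under certainty: two people with utilities 1 and 9
individual_utilities = np.array([1.0, 9.0])
utilitarian = np.sum(individual_utilities)              # 10.0: sum, no transform
prioritarian = np.sum(concavify(individual_utilities))  # 1 + 3 = 4.0: transform, then sum

# Under uncertainty: two equiprobable states with total utility 4 or 16
probs = np.array([0.5, 0.5])
total_utility = np.array([4.0, 16.0])
risk_neutral = np.sum(probs * total_utility)            # 10.0: expected total utility
risk_averse = np.sum(probs * concavify(total_utility))  # 0.5*2 + 0.5*4 = 3.0
```

The parallel: prioritarianism concavifies across people before summing, while moral risk aversion concavifies the sum before taking expectations across states.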
By the way, this is irrelevant to the methodological point above, but I’ll point out the interesting fact that risk aversion alone doesn’t get rid of the problem of the St Petersburg paradox:
A $(1/2)^n$ chance of winning £$2^n$ with linear utility: $\sum_{n=1}^{\infty} \left(\tfrac{1}{2}\right)^n \times 2^n = \sum_{n=1}^{\infty} 1 = \infty$.

A $(1/2)^n$ chance of winning £$2^{2^n}$ with log utility: $\sum_{n=1}^{\infty} \left(\tfrac{1}{2}\right)^n \times \ln\left(2^{2^n}\right) = \ln(2) \sum_{n=1}^{\infty} \left(\tfrac{1}{2}\right)^n \times 2^n = \infty$.
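A quick numerical check of the partial sums makes the divergence concrete (a minimal Python sketch of my own; using the simplified $2^n \ln 2$ form avoids computing the astronomically large payoff directly):

```python
import math

def linear_term(n):
    # (1/2)^n chance of winning 2^n with linear utility: each term is exactly 1
    return (0.5 ** n) * (2 ** n)

def log_term(n):
    # (1/2)^n chance of winning 2^(2^n) with log utility:
    # ln(2^(2^n)) = 2^n * ln(2), so each term is exactly ln(2)
    return (0.5 ** n) * (2 ** n) * math.log(2)

for N in (10, 20, 50):
    print(N,
          sum(linear_term(n) for n in range(1, N + 1)),  # = N
          sum(log_term(n) for n in range(1, N + 1)))     # = N * ln(2)
# Both partial sums grow linearly in N, so both series diverge.
```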
I don’t mean to say that risk preferences in general are unimpeachable and beyond debate. I was only saying that I personally do not put my risk preferences up for debate, nor do I try to convince others about their risk preferences.
In any debate about different approaches to ethics, I place a lot of weight on intuitionism as a way to resolve disagreements: I decide what I value by considering what each viewpoint would commit me to accepting. I do not place much weight on whether I can refute the internal logic of any viewpoint.