As Michael Dickens notes, and as I say in the introduction, I think the post argues on balance against adopting as much financial risk tolerance as existing EA discourse tends to recommend.
I appreciate that you think that, and I agree that Michael has said he agrees, but I don’t understand why either of you thinks that. I went point-by-point through your conclusion and it seems clear to me that the balance favours more risk-taking. I don’t see another way to resolve this than putting the arguments you put forward into each bucket, weighting them, and adding them up. Then we can see whether the point of disagreement is in the weights or in the arguments.
> Beyond an intuition-based re-weighting of the considerations,
If you think my weightings and comments on your conclusions relied a little too much on intuition, I’ll happily spell out those arguments in more detail. Let me know which ones you disagree with and I’ll expand on them.
I think we might be talking at cross purposes here. By flattening here, I meant “less concave”—hence more risk averse. I think we agree on this point?
Could you point me to what you’re referring to, when you say you note this above?
Ah—this is the problem with editing your posts. It’s actually the very last point I make. (I also made that point at much greater length in an earlier draft.) Essentially, the marginal utility of money for any philanthropy declines less steeply than for an individual, because you can always give to a marginal individual (I sketch this more formally below). I agree that you can do more funky things in other EA areas, but I don’t find any of the arguments convincing. For example:
> To my mind, one way that a within-cause philanthropic utility function could exhibit arbitrarily more curvature than a typical individual utility function is detailed in Appendix B.
I just thought this was a totally unrealistic model in multiple dimensions, and don’t really think it’s relevant to anything? I didn’t see it as being any different from me just saying “Imagine a philanthropist with an arbitrary utility function which is more curved than an individual’s”.
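To spell out my “marginal individual” point a bit more formally (this is just a toy model, assuming a pool of $n$ identical recipients, each with baseline consumption $c$ and CRRA utility $u(x) = x^{1-\eta}/(1-\eta)$): if a budget $W$ is split evenly across the pool, the philanthropic value of the budget is

$$U(W) = n \, u(c + W/n),$$

and its relative risk aversion with respect to the budget is

$$-\frac{W \, U''(W)}{U'(W)} = \eta \, \frac{W}{nc + W} < \eta.$$

So as long as the recipient pool is large relative to the budget, the philanthropist’s utility over money is much less curved than any one individual’s.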
I can’t speak for Michael of course, but as covered throughout the post, I think that the existing EA writing on this topic has internalized the pro-risk-tolerance points (e.g. that some other funding will be coming from uncorrelated sources) quite a bit more than the anti-risk-tolerance points (e.g. that some of the reasons that many investors seem to value safe investments so much, like “habit formation”, could apply to philanthropists to some extent as well). If you feel you and some other EAs have already internalized the latter more than the former, then that’s great too, as far as I’m concerned—hopefully we can come closer to consensus about what the valid considerations are, even if from different directions.
> By flattening here, I meant “less concave”—hence more risk averse. I think we agree on this point?
Less concave = more risk tolerant, no?
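Just to pin down terms: with the standard Arrow–Pratt measure of risk aversion, $A(w) = -u''(w)/u'(w)$, a more concave $u$ gives a larger $A(w)$ and hence more risk aversion, so “less concave” goes with more risk tolerance.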
I think I’m still confused about your response on the second point too. The point of this section is that since there are no good public estimates of the curvature of the philanthropic utility function for many top EA cause areas, like x-risk reduction, we don’t know if it’s more or less concave than a typical individual utility function. Appendix B just illustrates a bit more concretely how it could go either way. Does that make sense?
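For instance (this is just one toy functional form for illustration, not the exact model in the appendix), if spending $W$ on x-risk reduction buys survival probability along an S-curve such as $p(W) = 1/(1 + e^{-k(W - W_0)})$, the payoff to money is locally convex below the inflection point $W_0$ and concave above it, so the effective curvature could come out either higher or lower than a typical individual’s depending on where current funding sits.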
> Less concave = more risk tolerant, no?

Argh, yes. I meant more concave.

> The point of this section is that since there are no good public estimates of the curvature of the philanthropic utility function for many top EA cause areas, like x-risk reduction, we don’t know if it’s more or less concave than a typical individual utility function. Appendix B just illustrates a bit more concretely how it could go either way. Does that make sense?
No, it doesn’t make sense. “We don’t know the curvature, ergo it could be anything” is not convincing. What you seem to think is “concrete” seems entirely arbitrary to me.
Hold on, just to try wrapping up the first point—if by “flat” you meant “more concave”, why did you say that you “don’t see how [uncertainty] could flatten out the utility function” and that it “should be in ‘Justifying a more cautious portfolio’”?
Did you mean in the original comment to say that you don’t see how uncertainty could make the utility function more concave, and that it should therefore also be filed under “Justifying a riskier portfolio”?