Hi, sorry for the late reply—just got back from vacation.
As with most long posts, I expect this post has whatever popularity it has not because many people read it all, but because they skimmed parts, thought those parts made sense, and felt the overall message resonated with their own intuitions. Likewise, I expect your comment has whatever popularity it has because some of those readers have different intuitions, and because it looks on a skim as though you’ve shown that a careful reading of the post validates those intuitions instead…! But who knows.
Since there are hard-to-quantify considerations both for and against philanthropists being very financially risk tolerant, if your intuitions tend to put more weight on the considerations that point in the pro-risk-tolerance direction, you can certainly read the post and still conclude that a lot of risk tolerance is warranted. E.g. my intuition differs from yours at the top of this comment. As Michael Dickens notes, and as I say in the introduction, I think the post argues on balance against adopting as much financial risk tolerance as existing EA discourse tends to recommend.
Beyond an intuition-based re-weighting of the considerations, though, you raise questions about the qualitative validity of some of the points I raise. And as long as your comment is, I think the post does already address essentially all these questions. (Indeed, addressing them in advance is largely why the post is as long as it is!) For example, regarding “arguments from uncertainty”, you say
I don’t see how this could flatten out the utility function. This should be in “Justifying a more cautious portfolio”.

But to my mind, the way this flattening could work is explained in the “Arguments from uncertainty” section:

“one might argue that philanthropists have a hard time distinguishing between the value of different projects, and that this makes the “ex ante philanthropic utility function”, the function from spending to expected impact, less curved than it would be under more complete information…”
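(To make the mechanism vivid, here is a toy sketch of my own, with purely illustrative numbers rather than anything from the post: with complete information you fund the highest-value projects first, so marginal impact falls steeply with spending; with no ability to rank projects, expected impact is roughly linear in spending, i.e. the ex ante function is much less curved.)

```python
import numpy as np

# Toy illustration (my own assumptions): 1,000 candidate projects,
# each costing 1 unit, with heavy-tailed values.
rng = np.random.default_rng(0)
values = rng.lognormal(mean=0.0, sigma=1.0, size=1000)

def informed_impact(x):
    # Complete information: fund the x highest-value projects first,
    # so returns diminish steeply -- a sharply curved impact function.
    return np.sort(values)[::-1][:x].sum()

def uninformed_impact(x):
    # No ability to distinguish projects: expected impact of funding x
    # projects at random is x times the mean value -- linear, no curvature.
    return x * values.mean()

for x in (100, 200, 400, 800):
    print(x, round(informed_impact(x), 1), round(uninformed_impact(x), 1))
```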
Or, in response to my point that “The philanthropic utility function for any given “cause” could exhibit more or less curvature than a typical individual utility function”, you say
I don’t find any argument convincing that philanthropic utility functions are more curved than typical individuals’. (As I’ve noted above where you’ve attempted to argue this.) This should be in “Justifying a riskier portfolio”.
Could you point me to what you’re referring to, when you say you note this above? To my mind, one way that a within-cause philanthropic utility function could exhibit arbitrarily more curvature than a typical individual utility function is detailed in Appendix B.
So I can better understand what might be going on with all these evident failures of communication on my end more generally, instead of producing an ever-lengthening series of point-by-point replies, could you say more about why you don’t feel your questions are answered in these cases?
As Michael Dickens notes, and as I say in the introduction, I think the post argues on balance against adopting as much financial risk tolerance as existing EA discourse tends to recommend.
I appreciate you think that, and I agree that Michael has said he agrees, but I don’t understand why either of you think so. I went point-by-point through your conclusion, and it seems clear to me that the balance favours more risk-taking. I don’t see another way to convince me than to put the arguments you put forward into each bucket, weight them, and add them up. Then we can see whether the point of disagreement is in the weights or the arguments.
Beyond an intuition-based re-weighting of the considerations,
If you think my weightings and comments about your conclusions relied a little too much on intuition, I’ll happily spell out those arguments in more detail. Let me know which ones you disagree with and I’ll go into more depth.
I think we might be talking at cross purposes here. By flattening here, I meant “less concave”—hence more risk averse. I think we agree on this point?
Could you point me to what you’re referring to, when you say you note this above?
Ah—this is the problem with editing your posts. It’s actually the very last point I make. (I also made that point at much greater length in an earlier draft.) Essentially the utility for any philanthropy is less downward sloping than for an individual, because you can always give to a marginal individual (see the sketch at the end of this reply). I agree that you can do more funky things in other EA areas, but I don’t find any of the arguments convincing. For example:
To my mind, one way that a within-cause philanthropic utility function could exhibit arbitrarily more curvature than a typical individual utility function is detailed in Appendix B.
I just thought this was a totally unrealistic model in multiple dimensions, and don’t really think it’s relevant to anything? I didn’t see it as being any different from me just saying “Imagine a philanthropist with an arbitrary utility function which is more curved than an individual’s”.
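To spell out the marginal-individual point above, here’s a toy sketch with purely illustrative numbers of my own (log utility, arbitrary units; nothing from the post): one person’s utility from receiving money is sharply concave, but a philanthropist who can always hand a fixed transfer to a fresh recipient gets total impact that is close to linear in the budget, i.e. much less concave.

```python
import math

w0 = 1.0   # assumed baseline wealth of each recipient (arbitrary units)
t = 1.0    # assumed transfer size per recipient

def individual_utility(x):
    # One person's utility from receiving x on top of baseline wealth:
    # log utility, so sharply diminishing returns.
    return math.log(w0 + x)

def philanthropic_impact(budget):
    # Give t to budget/t distinct recipients: as long as fresh recipients
    # remain, total impact is simply linear in the budget.
    gain_per_recipient = individual_utility(t) - individual_utility(0)
    return (budget / t) * gain_per_recipient

for budget in (1, 10, 100, 1000):
    print(budget,
          round(individual_utility(budget), 2),    # concave growth
          round(philanthropic_impact(budget), 2))  # linear growth
```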
I can’t speak for Michael of course, but as covered throughout the post, I think that the existing EA writing on this topic has internalized the pro-risk-tolerance points (e.g. that some other funding will be coming from uncorrelated sources) quite a bit more than the anti-risk-tolerance points (e.g. that some of the reasons that many investors seem to value safe investments so much, like “habit formation”, could apply to philanthropists to some extent as well). If you feel you and some other EAs have already internalized the latter more than the former, then that’s great too, as far as I’m concerned—hopefully we can come closer to consensus about what the valid considerations are, even if from different directions.
By flattening here, I meant “less concave”—hence more risk averse. I think we agree on this point?
Less concave = more risk tolerant, no?
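(For reference, this is just the standard Arrow–Pratt correspondence between concavity and risk aversion; textbook material, nothing specific to the post:)

```latex
% Arrow-Pratt coefficient of absolute risk aversion for utility u at wealth w:
A(w) = -\frac{u''(w)}{u'(w)}
% A less concave u has smaller |u''(w)| and hence smaller A(w):
% lower risk aversion, i.e. more risk tolerance.
```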
I think I’m still confused about your response on the second point too. The point of this section is that since there are no good public estimates of the curvature of the philanthropic utility function for many top EA cause areas, like x-risk reduction, we don’t know if it’s more or less concave than a typical individual utility function. Appendix B just illustrates a bit more concretely how it could go either way. Does that make sense?
Argh, yes. I meant more concave.

The point of this section is that since there are no good public estimates of the curvature of the philanthropic utility function for many top EA cause areas, like x-risk reduction, we don’t know if it’s more or less concave than a typical individual utility function. Appendix B just illustrates a bit more concretely how it could go either way. Does that make sense?
No, it doesn’t make sense. “We don’t know the curvature, ergo it could be anything” is not convincing. What you seem to think is “concrete” seems entirely arbitrary to me.
Hold on, just to try wrapping up the first point—if by “flat” you meant “more concave”, why do you say “I don’t see how [uncertainty] could flatten out the utility function. This should be in ‘Justifying a more cautious portfolio’”?
Did you mean in the original comment to say that you don’t see how uncertainty could make the utility function more concave, and that it should therefore also be filed under “Justifying a riskier portfolio”?