(If you have very unusual moral preferences or empirical beliefs about the world, the specific parameters I chose are less applicable, but the general points still hold. Some examples:
If you believe global health is the most important (and arguably the only important) cause area, then you would want to reduce your correlation not only with Open Phil but also with the Gates Foundation and other reasonably effective global health foundations.
For all but the very largest donors, I expect you want to maximize your expected returns.
If you care a lot about SFE-based (suffering-focused ethics) worldviews, then you want to reduce your correlation with other SFE-based donors.
As I believe there are far fewer donors with SFE-based views, even moderately rich (by philanthropic standards, say several million in assets) donors may wish to have some risk aversion in their investments, in a way that isn’t true for the above two examples.
If you are the only sizable donor to a cause area and you’re pessimistic that you can convince other donors to join in within the next ~10 years, you don’t need to coordinate with other donors. I suspect this should mean pretty heavy risk aversion in practice in your investments (roughly on par with selfish investors), if you believe that there’s substantial diminishing returns to money in your cause area (which seems likely to me).
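One stylized way to see that last point (my sketch, assuming purely for illustration that cause-area output is isoelastic in total funding F, with curvature γ capturing diminishing returns): if you are essentially the only funder, F is just your own wealth W, and γ becomes your effective relative risk aversion over your own portfolio, the same as for a selfish investor with that curvature.

```latex
% Assumption (illustrative only): cause-area output is isoelastic in total
% funding F, and the sole donor's wealth W makes up essentially all of F.
u(F) = \frac{F^{1-\gamma}}{1-\gamma}, \quad \gamma > 0,
\qquad\Longrightarrow\qquad
-\,\frac{W\, u''(W)}{u'(W)} = \gamma .
% With gamma around 1 or above (substantial diminishing returns), the sole
% funder of a cause area should invest roughly like a similarly risk-averse
% selfish investor rather than near risk-neutrally.
```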
Great points. You’ve inspired me to look at ways to put more emphasis on these ideas in the discussion section that I haven’t yet added to the model paper.
Developing a stream of the finance literature that further develops and examines ideas from the EA community is one of the underlying goals of these papers. I believe these ideas are valid and interesting enough to attract top research talent, and that there is plenty of additional work to do to flesh them out, so having more researchers working on these topics would be valuable.
In this context I see these papers as setting out a framework for further work. I could see a follow-up paper specifying E(log(EA wealth)) as the utility function and then examining the implications, exactly as you’ve outlined above. It would surely need something more to make it worth a whole academic paper (e.g. examining alternative utility functions, examining relevant empirical data, estimating the size of the altruistic benefits gained by optimizing for this utility versus following a naive/selfish portfolio strategy). I would be excited to see papers like this get written and excited to collaborate on making it happen.
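As one concrete, entirely toy version of what such an exercise could look like (hypothetical wealth figures and return distributions of my own choosing, just to show the mechanics): maximize E(log(EA wealth)) over a small donor’s allocation to a risky asset that is correlated with the major donors’ portfolio, and compare it with a selfish log-utility benchmark. In this made-up setup the altruistic objective keeps improving all the way to the leverage cap, while the selfish log objective tops out much earlier; the main drag on the altruistic side is the covariance with the major donors.

```python
import numpy as np

# Toy Monte Carlo illustration (made-up numbers, not calibrated to anything):
# a small donor with log utility over total EA wealth vs. over their own wealth.
rng = np.random.default_rng(0)
n = 100_000
market = rng.normal(0.05, 0.18, n)   # return of the major donors' (market-like) portfolio
idio = rng.normal(0.00, 0.10, n)     # idiosyncratic component of the donor's risky asset
risky = 0.02 + market + idio         # donor's risky asset, beta ~ 1 to the major donors
rf = 0.01                            # risk-free rate

w_major = 50e9                       # hypothetical major-donor wealth
w_self = 1e6                         # hypothetical small-donor wealth

def expected_log(wealth):
    """E[log(wealth)], treating any chance of non-positive wealth as unacceptable."""
    return -np.inf if np.any(wealth <= 0) else np.log(wealth).mean()

def altruistic_objective(alpha):
    """E[log(EA wealth)] when the donor puts fraction alpha of their wealth in the risky asset."""
    own = w_self * (1 + rf + alpha * (risky - rf))
    return expected_log(w_major * (1 + market) + own)

def selfish_objective(alpha):
    """Benchmark: E[log(own wealth)] only."""
    return expected_log(w_self * (1 + rf + alpha * (risky - rf)))

alphas = np.linspace(0, 5, 251)      # allow up to 5x leverage
alt_best = alphas[np.argmax([altruistic_objective(a) for a in alphas])]
selfish_best = alphas[np.argmax([selfish_objective(a) for a in alphas])]
print(f"altruistic optimum ~{alt_best:.2f}x risky, selfish optimum ~{selfish_best:.2f}x risky")
```

A real paper would of course need proper return processes, taxes, donation timing, and so on; the point here is only that the small donor’s own variance barely matters under this utility, while the covariance term does.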
Directly on the points in your comment, I’m curious to what extent you’ve seen these ideas being action-guiding in practice. E.g. are you aware of smaller donors setting up DAFs and taking much more risk than they otherwise would? (Tax considerations, by the way, are another important thing I’ve abstracted away in my current papers.) Are you aware of people specifically taking steps to reduce their correlations with other donors?
As in my papers, I’d split the implications you discussed above into buckets of risk-aversion and mission-correlation. If a smaller donor’s utility depends on log(EA wealth) then of course it makes sense for them to have very little risk aversion in regards to their own wealth. But then they should have the mission-correlation effect of being averse to correlations with major donors. It seems reasonable to me to think of the major donor portfolio as approximately being a global diversified portfolio, i.e. the market (perhaps with some overweights on FB, MSFT, BRK). Just intuitively, I’d say that this means their aversion to market risk should be about equal to what it would be if they were selfish. Which means we’re back to square one of just defaulting to a normal portfolio. That is, the (mission-correlated) risk the altruist sees in most investments will be about equal to the (selfish) market risk most investors see. So their optimal portfolios will be about the same.
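To make that intuition a bit more explicit, here is a rough second-order sketch under my own simplifying notation: w is the small donor’s wealth, W the major donors’ wealth, r_p and r_M their respective portfolio returns, and w is much smaller than W.

```latex
% Sketch under the assumptions above (w << W); "const" collects terms that do
% not depend on the small donor's portfolio choice.
\mathbb{E}\!\left[\log\bigl(W(1+r_M) + w(1+r_p)\bigr)\right]
\;\approx\;
\mathbb{E}\!\left[\log\bigl(W(1+r_M)\bigr)\right]
+ \frac{w}{W}\Bigl(\mathrm{const} + \mathbb{E}[r_p] - \operatorname{Cov}(r_p, r_M)\Bigr)
+ O\!\bigl((w/W)^2\bigr).
```

At this order the donor’s own variance doesn’t appear (hence very little risk aversion over their own wealth), and the mission-correlation penalty is the covariance with the major-donor portfolio. If that portfolio is roughly the market, Cov(r_p, r_M) = β_p·Var(r_M), so what gets penalized is market exposure, which is the sense in which the altruist’s relevant risk looks like the selfish investor’s market risk.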
Of course, mission-correlated risk aversion could have different implications from normal risk aversion if it is easier to change the covariance of your portfolio with major donors than it is to change the variance of your portfolio. But that’s my point in the above paragraph—the driver of both these variances is going to be your market risk exposure. And quickly reviewing Michael’s post, I’d say all the ideas he mentions are also plausibly good ideas for mainstream investors looking to optimize their portfolios. If this is the case, then we need something more to imply altruists should deviate from following standard, even if advanced, financial advice (e.g. Hauke’s example of crypto could be such a special case, or other investments that are correlated with government policy shifts, or technological shifts that change the altruistic opportunities that are available).
Interested to hear your thoughts on this. I would be particularly excited to see more EA research on a) the expected trajectories of effectiveness over time in different cause areas, and b) the amount of diminishing returns to money in each area. On a), I’d note Founders Pledge has done some good, recent work on this with their Investing to Give and Climate research. Would be great to see more. On b), I think there is tons of thinking out there on this and I feel like it would be great if someone organized this collective wisdom to establish what the current consensus views are (e.g. like ‘global health has low diminishing returns’, ‘AI safety research has relatively high diminishing returns right now’).
Great comment. Related: part of me is glad that EA is so exposed to crypto, because governments are the biggest altruistic actors, and if crypto’s valuation is largely due to its potential to reduce taxation, it might be a good mission hedge.