I appreciate many important points in this essay about the additional considerations for altruistic investing, including taking on more risk than normal because of lower philanthropic risk aversion, attending to correlations with other donors (current and future), and the variation in diminishing-returns curves across the causes and interventions popular in effective altruism. I think there are very large gains to be had if effective altruists make better use of these considerations.
But at the same time I am quite disconcerted by the strong forward-looking EMH-violating claims about massively outsized returns to the specific investment strategies, despite the limited disclaimers (and the factor literature). Concern for relative performance only goes so far as an explanation for predicting such strong inefficiencies going forward: the analysis would seem to predict that, e.g., very wealthy individuals investing on their own accounts would pile into such strategies if their advantage were discernible. I would be much less willing to share the article because of the inclusion of those elements.
I would also add some special anti-risk considerations for altruists and for financial writing directed at them, e.g.:
Investment blowups that cause personal financial difficulties, get attributed to effective altruism, and have fallout on others
Investment writings taken as advice damaging especially valuable assets that would otherwise be used for altruism (just the flip side of the value of improvements)
Relative underperformance of the broad EA portfolio (especially underperformance that draws blame) contributing to a weaker reputation for effective altruism, interacting negatively with a very valuable asset (worth much of the EA portfolio): the entry of new altruists
On net, it still looks like EA should be taking much more risk than is commonly recommended for individual retirement investments, and I’d like to see active development of this sort of thinking, but want to emphasize the importance of caution and rigor in doing so.
I’m not surprised that this is a sticking point. I think people should be highly skeptical of claims about EMH violations, and I didn’t present a lot of evidence, because that would have dramatically increased the length of this essay. I would refer to the sources I linked in the relevant section.
If you assume the EMH is broadly true, almost all of the rest of the essay still applies. When I presented some rough estimates for target leverage under full Kelly and half Kelly, I gave estimates for the global market portfolio as well as for value/momentum/managed futures. The main exception is that, if the EMH is fully true, you might not want to invest in zero-correlation assets, because (AFAIK) there aren’t really any with positive expected real return; but you’d still want to seek out low correlation to the extent that it’s possible.
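To make the full-Kelly/half-Kelly arithmetic concrete, here is a minimal sketch. The return and volatility inputs are illustrative assumptions for a global-market-portfolio-like asset, not the essay's estimates:

```python
def kelly_leverage(expected_return, risk_free_rate, volatility):
    """Kelly-optimal leverage under log utility of wealth:
    (mu - r) / sigma^2, for a continuously rebalanced position."""
    return (expected_return - risk_free_rate) / volatility ** 2

# Illustrative assumptions: 5% expected real return, 0% real
# risk-free rate, 13% annualized volatility.
full_kelly = kelly_leverage(0.05, 0.00, 0.13)
half_kelly = full_kelly / 2
print(f"full Kelly: {full_kelly:.1f}x, half Kelly: {half_kelly:.1f}x")
```

Half Kelly is a common compromise because it gives up relatively little expected growth in exchange for much lower variance and robustness to overestimated returns.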
Regarding the claims I made about potential future performance:
The claims on value and momentum are consistent with academic factor literature and estimates from evidence-focused investing firms like Research Affiliates.
I didn’t make any strong claims about managed futures performance, but the made-up numbers I gave are far lower than the theoretical backtest performance from the AQR paper I cited, as well as actual performance from practitioners such as Richard Dennis.
Investment writings taken as advice damaging especially valuable assets that would otherwise be used for altruism (just the flip side of the value of improvements)
I don’t understand what this means, could you explain?
EDIT: I did a little more looking and found these two papers that might be persuasive to value and momentum skeptics:
https://www.aqr.com/Insights/Research/Journal-Article/Fact-Fiction-and-Momentum-Investing
https://www.aqr.com/Insights/Research/Journal-Article/Fact-Fiction-and-Value-Investing
The papers themselves are fairly short, but they summarize a lot of evidence from other sources.
I agree that the EMH-consistent version of this still suggests the collective EA portfolio should be more leveraged and should do more to manage correlations across donors, both now and over time, and that there is a large factor literature in support of these factors (although in general academic finance suffers from data mining in backtests and from EMH dissipation of real factors once they become known).
Re the text you quoted, I just mean that if EAs damage their portfolios (e.g. by taking on large amounts of leverage and not properly monitoring it, so that leverage ratios explode and take out the portfolio), that’s fewer EA dollars donated (aside from the reputational effects), and I would want to do more to ensure readers don’t go off half-cocked and blow up their portfolios without really knowing what they are doing.
Leveraged ETFs are one way to keep your leverage ratio from blowing up, without any investor effort.
Keeping all the considerations in this post in mind seems very difficult, so perhaps the ideal solution would be if there were an institution to do it for individuals, such as EA Funds or something like it. You could donate to the fund and let them adjust leverage, correlation with other donors to the same cause, and everything else on your behalf.
I like this idea—a centrally-managed fund would be a lot easier than a bunch of people separately doing their own thing. But it creates a problem where the investors/donors in the fund might have unrealistic expectations about performance and could become really unhappy if the fund underperforms the S&P for several consecutive years—which is bound to happen sometimes if the fund is aiming for low correlation. This would be particularly bad from an optics perspective. So there are pros and cons to this idea.
Good point. I think such a fund would want to be very clear that it’s not for the faint of heart and that it’s done in the spirit of trying new, risky things. If that message were front and center, I expect the backlash would be smaller.
I agree that carefully-vetted institutional solutions are probably where one would like to end up.
See Colby Davis on the problems with leveraged ETFs.
Thanks! From my reading of the post, that critique is not really specific to leveraged ETFs? Volatility drag is inherent to leverage in general (and even to non-leveraged investing, to a smaller degree).
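To illustrate the volatility-drag point, here's a small sketch using the standard approximation for the compound growth of a continuously rebalanced position; the 5% return and 15% volatility figures are made-up assumptions:

```python
def geometric_return(leverage, mu=0.05, sigma=0.15):
    """Approximate compound (geometric) return of a continuously
    rebalanced leveraged position: L*mu - (L*sigma)^2 / 2.
    The 5% return / 15% volatility defaults are illustrative."""
    return leverage * mu - (leverage * sigma) ** 2 / 2

for lev in (1.0, 2.0, 3.0):
    arithmetic = lev * 0.05
    geo = geometric_return(lev)
    print(f"{lev:.0f}x: arithmetic {arithmetic:.2%}, "
          f"geometric {geo:.2%}, drag {arithmetic - geo:.2%}")
```

Note that the drag is nonzero even at 1x leverage and grows with the square of the leverage ratio, which is why it bites much harder for leveraged positions.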
He says: “In my next post, I’m going to dive into more detail on how to distinguish between good and bad uses of leverage.” So I found his next post on leverage, which, coincidentally, is one mentioned in the OP: “The Line Between Aggressive and Crazy”. There he clarifies why he doesn’t like leveraged ETFs:
From this we start to see the problem with levered ETFs as they are currently constructed: they generally use too much leverage applied to too volatile of assets. Even with the plain vanilla S&P 500 3x leverage is too much. And after accounting for the hefty transactions costs and management fees these ETFs charge, even 2x might be suboptimal (especially if you believe returns will be lower in the future than they have in recent decades). And the S&P 500 is one of the most conservative targets for these products. Take a look at the websites of levered ETF providers and you will see ways to make levered bets on particular industries like biotech or the energy sector, or on commodities like oil and gold, or for more esoteric instruments yet, almost all of which are more volatile than a broadly diversified index like the S&P 500, and thus supporting much lower Kelly leverage ratios, probably less than 2x.
So unless transaction costs are a dealbreaker, it seems like he’s mainly opposed to the fact that most leveraged ETFs use too much leverage for their level of volatility (relative to the Kelly Criterion, which assumes logarithmic utility of wealth), not that the instrument itself is flawed? Of course, leveraged ETFs implement a “constant leverage” strategy, and later in that post, Davis proposes adjusting the leverage ratio dynamically (which I agree is better, though it requires more work).
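His point about volatility can be made concrete with the Kelly formula he is implicitly referencing; the return and volatility figures below are illustrative assumptions, not Davis's numbers:

```python
def kelly_leverage(mu_excess, sigma):
    """Kelly-optimal leverage under log utility of wealth:
    excess return divided by variance."""
    return mu_excess / sigma ** 2

# Illustrative assumptions: ~5% excess return for the S&P 500 at 16%
# volatility, and the same excess return for a sector fund at 30%.
print(f"S&P 500-like index:  {kelly_leverage(0.05, 0.16):.2f}x")
print(f"volatile sector fund: {kelly_leverage(0.05, 0.30):.2f}x")
```

Under these assumptions even the broad index supports just under 2x leverage, and the more volatile sector fund well under 1x, which matches the quoted claim that 3x products on volatile assets sit far beyond the Kelly-optimal point.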
Hmm. Maybe you’re right. I guess I was thinking there was an important difference between “constant leverage” and infrequent rebalancing. But I guess that’s a more complicated subject.
That seems to be a common view, but I haven’t yet been able to find any reason why that would be the case, except insofar as rebalancing frequency affects how leveraged you are. I discussed the topic a bit here. Maybe someone who knows more about the issue can correct me.
Can you clarify what exactly your concern is? Which of these best describes your position?
1. Factor premia basically don’t exist.
2. Factor premia are much smaller than this essay claims.
3. Factor premia are substantial, but this essay does not do enough to justify that claim, so it’s not a good reference.
Initially I thought you were saying #1, but from your reply below, it sounds like you at least believe that factor premia exist, so now I’m not sure.
Edit: Or I guess a fourth option: factor premia exist, but generally should not be promoted (possibly because most readers will run into the same behavioral biases that cause the premia to exist).
Or something else entirely.
I just updated the essay to include some more justification for (1) the claim that it’s possible to beat the market and (2) the specific return estimates given. I added the new material to the sections “Improving on conventional wisdom” and “Return expectations”, and have reproduced the latter below.
-------
As something of a corroboration, RAFI provides estimates of forward-looking five-year return for various long/short factors. At the time of this writing, it makes the following predictions for its long/short value and momentum factors (net of transaction costs):
5.7% for US large-cap value
1.1% for US large-cap momentum
8.0% for US small-cap value
6.4% for US small-cap momentum
(RAFI’s projections for foreign developed market factors are similar but generally a bit higher.)
A concentrated long-only portfolio on a particular factor would have approximately the same expected return as the long/short factor plus the broad market (although that’s not quite how the math works).
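As a rough worked example of that approximation, using the RAFI long/short estimates above plus an assumed 5% broad-market expected return (the market figure is my assumption for illustration, not RAFI's):

```python
broad_market = 0.05  # assumed broad-market expected return (illustrative)

# RAFI long/short factor estimates quoted above.
long_short = {
    "US large-cap value": 0.057,
    "US large-cap momentum": 0.011,
    "US small-cap value": 0.080,
    "US small-cap momentum": 0.064,
}

# Rough long-only expected return: broad market plus the long/short premium.
for name, premium in long_short.items():
    print(f"{name}: ~{broad_market + premium:.1%}")
```

Again, this is only an approximation: a real concentrated long-only portfolio has partial, not full, exposure to the long/short factor.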
The underlying indexes used by VMOT make some improvements on RAFI’s simple factor model (see Quantitative Value and Quantitative Momentum for details[^26]), so it might be reasonable to assume a higher expected return for VMOT. If we then subtract fees, we get something close to the original estimate I gave for VMOT (probably a bit higher[^27]).
RAFI believes the value and momentum premia will work as well in the future as they have in the past, and AQR makes the same claim in some of the papers I linked above. They offer good support for this claim, but in the interest of conservatism, we could justifiably subtract a couple of percentage points from expected return to account for premium degradation.
Note that RAFI’s estimates use factor timing—attempting to guess how well factors will perform based on the current market environment, rather than just looking at historical behavior. This practice is not widely accepted; for example, see AQR’s Factor Timing is Deceptively Difficult.
Also note that these numbers only give expected mean return. Even if these estimates are accurate, we could still see much higher or lower returns due to market volatility.
(Dashing this off quickly, so some of this may be inelegantly stated.)
I appreciate this response, and I think that elements of it apply to many other risks an EA could take, including business ventures and work on charitable causes that may be high-return but carry a significant risk of major public backlash or other bad consequences.
Even if we have a collective reason to seek very good results even at the cost of taking on risk (slowly diminishing marginal utility, as noted in the post), and even if the community can internally tolerate a few individual disasters (because we have collective resources to fall back on), we get a lot of value from having a reputation for wisdom, caution, and common sense (especially given the natural weirdness of so many core EA ideas).
This doesn’t mean we should necessarily avoid any particular risk, but it seems important for would-be risk-takers to consider, even if they are personally open to bearing a lot of risk for the sake of EA goals.