Which city is next?
Trend-following tends to perform worse in rapid drawdowns because it doesn’t have time to rebalance.
I wonder if it makes sense to rebalance more frequently when volatility (or trading volume) is high.
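One way to operationalize that idea is a volatility-triggered rebalancing rule. Here is a minimal sketch; all thresholds and intervals below are made-up illustrations, not anything a real fund does:

```python
# Sketch of a volatility-triggered rebalance rule (illustrative
# thresholds and intervals; not an implementation of any real strategy).
import statistics

def realized_vol(returns):
    # Sample standard deviation of recent daily returns.
    return statistics.stdev(returns)

def should_rebalance(days_since_last, recent_returns,
                     normal_interval=21, fast_interval=5,
                     vol_threshold=0.02):
    # Rebalance on a short cycle when recent volatility is high,
    # otherwise on the usual (e.g. monthly) cycle.
    if realized_vol(recent_returns) > vol_threshold:
        interval = fast_interval
    else:
        interval = normal_interval
    return days_since_last >= interval

calm = [0.001, -0.002, 0.0015, 0.0, -0.001, 0.002]
stressed = [0.03, -0.05, 0.04, -0.06, 0.02, -0.04]
print(should_rebalance(10, calm))      # False (calm: wait the full cycle)
print(should_rebalance(10, stressed))  # True  (high vol: rebalance sooner)
```

One could equally key the threshold off trading volume instead of realized volatility; the structure of the rule would be the same.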
The AlphaArchitect funds are more expensive than Vanguard funds, but they’re just as cheap after adjusting for factor exposure.
Do you happen to have the numbers available that you used for this calculation? Would be curious to see how you’re doing the adjustment for factor exposure.
Looking at historical performance of those Alpha Architect funds (QVAL, etc.), it looks like they all had big dips in March 2020 of around 25%, at the same time as the rest of the market.
And I’ve heard it claimed that assets in general tend to be more correlated during drawdowns.
If that’s so, it seems to mitigate to some extent the value of holding uncorrelated assets, particularly in a portfolio with leverage, because it means your risk of margin call is not as low as you might otherwise think.
Have you looked into this issue of correlations during drawdowns, and do you think it changes the picture?
Ah, good point! This was not already clear to me. (Though I do remember thinking about these things a bit back when Piketty’s book came out.)
I just feel like I don’t know how to think about this, because I understand too little finance and economics.
Okay, sounds like we’re pretty much in the same boat here. If anyone else is able to chime in and enlighten us, please do so!
My superficial impression is that this phenomenon is somewhat surprising a priori, but that there isn’t really a consensus on what explains it.
Hmm, my understanding is that the equity premium is the difference between equity returns and bond (treasury bill) returns. Does that tell us about the difference between equity returns and GDP growth?
A priori, would you expect both equities and treasuries to have returns that match GDP growth?
But if you delay the start of this whole process, you gain time in which you can earn above-average returns by e.g. investing into the stock market.
Shouldn’t investing into the stock market be considered a source of average returns, by default? In the long run, the stock market grows at the same rate as GDP.
If you think you have some edge, that might be a reason to pick particular stocks (as I sometimes do) and expect returns above GDP growth. But generically I don’t think the stock market should be considered a source of above-average returns. Am I missing something?
You could make an argument that a certain kind of influence strictly decreases with time. So the hinge was at the Big Bang.
But, there (probably) weren’t any agents around to control anything then, so maybe you say there was zero influence available at that time. Everything that happened was just being determined by low level forces and fields and particles (and no collections of those could be reasonably described as conscious agents).
Today, much of what happens (on Earth) is determined by conscious agents, so in some sense the total amount of extant influence has grown.
Let’s maybe call the first kind of influence time-priority, and the second agency. So, since the Big Bang, the level of time-priority influence available in the universe has gone way down, but the level of aggregate agency in the universe has gone way up.
On a super simple model that just takes these two into account, you might multiply them together to get the total influence available at a certain time (and then divide by the number of people alive at that time to get the average person’s influence). This number will peak somewhere in the middle (assuming it’s zero both at the Big Bang and at the Heat Death).
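As a toy illustration of that super simple model (the functional forms below are pure assumptions, chosen only so that time-priority influence falls to zero at the end and agency rises from zero at the start):

```python
# Toy model of "total influence available" over time. The linear forms
# are illustrative assumptions, not claims about the actual universe.

HORIZON = 100.0  # stand-in for the lifetime of the universe

def time_priority(t):
    # Assumed to decrease monotonically from 1 at the Big Bang to 0 at the end.
    return 1.0 - t / HORIZON

def agency(t):
    # Assumed to grow from 0 at the Big Bang (no agents around yet).
    return t / HORIZON

def total_influence(t):
    # The super simple model: multiply the two kinds of influence.
    return time_priority(t) * agency(t)

# Zero at both endpoints, so the product peaks somewhere in the middle
# (with these symmetric assumptions, exactly halfway).
values = [(t, total_influence(t)) for t in range(0, 101)]
peak_t = max(values, key=lambda tv: tv[1])[0]
print(peak_t)  # 50
```

Dividing by a population curve, or letting agency saturate at some cap, would move the peak around but not the basic shape: zero at both ends, maximum somewhere in between.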
That maybe doesn’t tell you much, but then you could start taking into account some other considerations, like how x-risk could result in a permanent drop of agency down to zero. Or how perhaps there’s an upper limit on how much agency is potentially available in the universe.
In any case, it seems like the direction of causality should be a pretty important part of the analysis (even if it points in the opposite direction of another factor, like increasing agency), either as part of the prior or as one of the first things you update on.
Separately, I still don’t see the case for building earliness into our priors, rather than updating on the basis of finding oneself seemingly-early.
Do you have some other way of updating on the arrow of time? (It seems like the fact that we can influence future generations, but they can’t influence us, is pretty significant, and should be factored into the argument somewhere.)
I wouldn’t call that an update on finding ourselves early, but more like just an update on the structure of the population being sampled from.
And the current increase in hinginess seems unsustainable, in that the increase in hinginess we’ve seen so far leads to x-risk probabilities that lead to drastic reduction of the value of worlds that last for, e.g., a millennium at current hinginess levels.
Didn’t quite follow this part. Are you saying that if hinginess keeps going up (or stays at the current, high level), that implies a high level of x-risk as well, which means that, with enough time at that hinginess (and therefore x-risk) level, we’ll wipe ourselves out; and therefore that we can’t have sustained, increasing / high hinginess for a long time?

(That’s where I would have guessed you were going with that argument, but I got confused by the part about “drastic reduction of the value of worlds …”, since the x-risk component seems like a reason the high-hinginess state can’t last a long time, rather than an argument that it would last but coincide with a sad / low-value scenario.)
Just a quick thought on this issue: Using Laplace’s rule of succession (or any other similar prior) also requires picking a somewhat arbitrary start point.
Doesn’t the uniform prior require picking an arbitrary start point and end point? If so, switching to a prior that only requires an arbitrary start point seems like an improvement, all else equal. (Though maybe still worth pointing out that all arbitrariness has not been eliminated, as you’ve done here.)
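For concreteness, Laplace’s rule of succession says that after n trials with s successes, the probability of success on the next trial is (s + 1) / (n + 2), so the arbitrary start point enters through the trial count n. A quick sketch (the century counts below are rough illustrations, not careful estimates):

```python
# Laplace's rule of succession: after n trials with s successes, the
# posterior probability of success on the next trial is (s + 1) / (n + 2).

def laplace(successes, trials):
    return (successes + 1) / (trials + 2)

# The arbitrary start point enters through the trial count. E.g., treat
# each century as a trial and "this century is the hinge" as a success:
# starting the clock at agriculture (~100 centuries ago) gives a very
# different estimate than starting at the origin of Homo sapiens
# (~3,000 centuries ago). Both counts are illustrative only.
print(laplace(0, 100))   # 1/102  ~ 0.0098
print(laplace(0, 3000))  # 1/3002 ~ 0.00033
```

So the later the chosen start point, the fewer "failures" have been observed, and the higher the probability the rule assigns to the next period; the arbitrariness hasn’t been eliminated, just reduced from two choices to one.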
The Nobel Prize comes with a million dollars (9,000,000 SEK). 50k doesn’t seem like that much, in comparison.
Another Karnofsky series that I thought was important (and perhaps doesn’t fit anywhere else) is his posts on The Straw Ratio.
Also: Charity: The video game that’s real, by Holden Karnofsky
FYI Purchase fuzzies and utilons separately is showing up twice in the list.
ballistic ones are faster, but reach Mach 20 and similar speeds outside of the atmosphere
This seems notable, since there is no sound w/o atmosphere. So perhaps ballistic missiles never actually engage in hypersonic flight, despite reaching speeds that would be hypersonic if in the atmosphere? Though I would be surprised if they’re reaching Mach 20 at a high altitude and then not still going super fast (above Mach 5) on the way down.
according to Thomas P. Christie (DoD director of Operational Test and Evaluation from 2001–2005) current defense systems “haven’t worked with any degree of confidence”. A major unsolved problem is that credible decoys are apparently “trivially easy” to build, so much so that during missile defense tests, balloon decoys are made larger than warheads—which is not something a real adversary would do. Even then, tests fail 50% of the time.
I didn’t follow this. What are the decoys? Are they made by the attacking side or the defending side? Why does them being easy to build mean that people make large ones during tests, and why wouldn’t that also happen in a real attack? Why is it notable that tests still fail at a high rate in the presence of large decoys?
Thanks! Just read it.
I think there’s a key piece of your thinking that I don’t quite understand / disagree with, and it’s the idea that normativity is irreducible.
I think I follow you that if normativity were irreducible, then it wouldn’t be a good candidate for abandonment or revision. But that seems almost like begging the question. I don’t understand why it’s irreducible.
Suppose normativity is not actually one thing, but is a jumble of 15 overlapping things that sometimes come apart. This doesn’t seem like it poses any challenge to your intuitions from footnote 6 in the document (starting with “I personally care a lot about the question: ‘Is there anything I should do, and, if so, what?’”). And at the same time it explains why there are weird edge cases where the concept seems to break down.
So few things in life seem to be irreducible. (E.g. neither Eric nor Ben is irreducible!) So why would normativity be?
[You also should feel under no social obligation to respond, though it would be fun to discuss this the next time we find ourselves at the same party, should such a situation arise.]
Don’t Make Things Worse: If a decision would definitely make things worse, then taking that decision is not rational.
Don’t Commit to a Policy That In the Future Will Sometimes Make Things Worse: It is not rational to commit to a policy that, in the future, will sometimes output decisions that definitely make things worse.
One could argue that R_CDT sympathists don’t actually have much stronger intuitions regarding the first principle than the second—i.e. that their intuitions aren’t actually very “targeted” on the first one—but I don’t think that would be right. At least, it’s not right in my case.
I would agree that, with these two principles as written, more people would agree with the first. (And certainly believe you that that’s right in your case.)
But I feel like the second doesn’t quite capture what I had in mind regarding the DMTW intuition applied to P_’s.
Consider two alternate versions:
If a decision would definitely make things worse, then taking that decision is not good policy.
If a decision would definitely make things worse, a rational person would not take that decision.
It seems to me that these two claims are naively intuitive on their face, in roughly the same way that the “… then taking that decision is not rational” version is. And it’s only after you’ve considered prisoners’ dilemmas or Newcomb’s paradox, etc., that you realize that good policy (or being a rational agent) actually diverges from what’s rational in the moment.
(But maybe others would disagree on how intuitive these versions are.)
EDIT: And to spell out my argument a bit more: if several alternate formulations of a principle are each intuitively appealing, and it turns out that whether some claim (e.g. R_CDT is true) is consistent with the principle comes down to the precise formulation used, then it’s not quite fair to say that the principle fully endorses the claim and that the claim is not counter-intuitive from the perspective of the original intuition.
Of course, this argument is moot if it’s true that the original DMTW intuition was always about rational in-the-moment action, and never about policies or actors. And maybe that’s the case? But I think it’s a little more ambiguous with the ”… is not good policy” or “a rational person would not...” versions than with the “Don’t commit to a policy...” version.
EDIT2: Does what I’m trying to say make sense? (I felt like I was struggling a bit to express myself in this comment.)