And then they can read the post above to have that question clearly answered!
Any tips on the ‘how’ of funding EA work at such think tanks?
Reach out to individual researchers and suggest they apply for grants (from SFF, LTFF, etc.)? Reach out as a funder with a specific proposal? Something else?
You’re right, my mistake.
> opinion which … is mainly advocated by billionaires

Do you mean that most people advocating for techno-positive longtermist concern for x-risk are billionaires, or that most billionaires so advocate?

I don’t think either claim is true (or even close to true).
The RSP idea is cool.
Dumb question — what part of the post does this refer to?
Fair point.
One reason to keep Tractability separate from Neglectedness is to distinguish between “% of problem solved / extra dollars from anyone” and “% of problem solved / extra dollars from you”.
In theory, anybody’s marginal dollar is just as good as anyone else’s. But by making the distinction explicit, it forces you to consider where on the marginal utility curve we actually are. If you don’t track how many other dollars have already been poured into solving a problem, you might be overly optimistic about how far the next dollar will go.
I think this may be close to the reason Holden(?) originally had in mind when he included neglectedness in the framework.
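To make the marginal-utility-curve point concrete, here's a toy sketch (the exponential functional form and all the dollar figures are made up purely for illustration):

```python
import numpy as np

def problem_solved(total_dollars, scale=1e7):
    # Toy diminishing-returns curve: fraction of the problem solved
    # as a function of total dollars spent (from anyone).
    return 1 - np.exp(-total_dollars / scale)

def your_marginal_impact(existing_dollars, your_dollars=1.0, scale=1e7):
    # Extra fraction of the problem solved by *your* dollars,
    # given how much everyone else has already spent.
    return problem_solved(existing_dollars + your_dollars, scale) - problem_solved(existing_dollars, scale)

# The same marginal dollar goes much further on a neglected problem:
print(your_marginal_impact(existing_dollars=1e5))  # ~1e-7 of the problem: barely funded so far
print(your_marginal_impact(existing_dollars=5e7))  # ~7e-10: heavily funded already
```

If you only ever look at "% of problem solved / extra dollar from you" without asking where on this curve we are, the two cases above look superficially similar, which is the failure mode tracking neglectedness separately is meant to catch.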
Note that Vitalik Buterin has also recently started promoting related ideas: Retroactive Public Goods Funding
Which city is next?
> Trendfollowing tends to perform worse in rapid drawdowns because it doesn’t have time to rebalance
I wonder if it makes sense to rebalance more frequently when volatility (or trading volume) is high.
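Roughly the kind of rule I have in mind (just a sketch with arbitrary constants, not something I've tested):

```python
import numpy as np

def rebalance_interval_days(recent_daily_returns, base_days=21, target_vol=0.15):
    # Toy rule: rebalance more often when recent realized volatility is high,
    # less often when it's low. recent_daily_returns is a 1-D array of daily returns.
    realized_vol = np.std(recent_daily_returns) * np.sqrt(252)  # annualized
    interval = base_days * target_vol / max(realized_vol, 1e-6)
    # Cap between daily and roughly quarterly rebalancing.
    return int(np.clip(interval, 1, 63))
```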
> The AlphaArchitect funds are more expensive than Vanguard funds, but they’re just as cheap after adjusting for factor exposure.
Do you happen to have the numbers available that you used for this calculation? Would be curious to see how you’re doing the adjustment for factor exposure.
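For reference, the naive version of the adjustment I'd picture is dividing the expense ratio by the fund's loading on the factor you're actually after. The numbers below are placeholders, not the real figures for QVAL or any Vanguard fund:

```python
def cost_per_unit_factor_exposure(expense_ratio, factor_loading):
    # Expense ratio per unit of exposure to the factor you want.
    return expense_ratio / factor_loading

# Made-up example: a 0.49% concentrated value fund with a value loading of 1.0
# vs. a 0.05% broad index fund whose value loading is only 0.1.
print(cost_per_unit_factor_exposure(0.0049, 1.0))   # 0.0049
print(cost_per_unit_factor_exposure(0.0005, 0.10))  # 0.0050, roughly as expensive per unit of exposure
```

Is that the kind of calculation you're doing, or something more involved (e.g. regression-based loadings)?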
Looking at historical performance of those Alpha Architects funds (QVAL, etc), it looks like they all had big dips in March 2020 of around 25%, at the same time as the rest of the market.
And I’ve heard it claimed that assets in general tend to be more correlated during drawdowns.
If that’s so, it seems to mitigate to some extent the value of holding uncorrelated assets, particularly in a portfolio with leverage, because it means your risk of margin call is not as low as you might otherwise think.
Have you looked into this issue of correlations during drawdowns, and do you think it changes the picture?
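In case it helps, here's roughly how I'd check it, assuming you have daily return series for the two assets as pandas Series (the -2% threshold is just an arbitrary proxy for "drawdown days"):

```python
import pandas as pd

def correlation_overall_vs_drawdowns(returns_a: pd.Series, returns_b: pd.Series, threshold=-0.02):
    # Compare the correlation of two return series over all days
    # with the correlation on days when asset A falls sharply.
    overall = returns_a.corr(returns_b)
    drawdown_days = returns_a < threshold
    during_drawdowns = returns_a[drawdown_days].corr(returns_b[drawdown_days])
    return overall, during_drawdowns
```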
Ah, good point! This was not already clear to me. (Though I do remember thinking about these things a bit back when Piketty’s book came out.)
> I just feel like I don’t know how to think about this because I understand too little finance and economics
Okay, sounds like we’re pretty much in the same boat here. If anyone else is able to chime in and enlighten us, please do so!
My superficial impression is that this phenomenon is somewhat surprising a priori, but that there isn’t really a consensus on what explains it.
Hmm, my understanding is that the equity premium is the difference between equity returns and bond (treasury bill) returns. Does that tell us about the difference between equity returns and GDP growth?
A priori, would you expect both equities and treasuries to have returns that match GDP growth?
> But if you delay the start of this whole process, you gain time in which you can earn above-average returns by e.g. investing into the stock market.
Shouldn’t investing into the stock market be considered a source of average returns, by default? In the long run, the stock market grows at the same rate as GDP.
If you think you have some edge, that might be a reason to pick particular stocks (as I sometimes do) and expect returns above GDP growth.
But generically I don’t think the stock market should be considered a source of above-average returns. Am I missing something?
You could make an argument that a certain kind of influence strictly decreases with time. So the hinge was at the Big Bang.
But, there (probably) weren’t any agents around to control anything then, so maybe you say there was zero influence available at that time. Everything that happened was just being determined by low level forces and fields and particles (and no collections of those could be reasonably described as conscious agents).
Today, much of what happens (on Earth) is determined by conscious agents, so in some sense the total amount of extant influence has grown.
Let’s maybe call the first kind of influence time-priority, and the second agency. So, since the Big Bang, the level of time-priority influence available in the universe has gone way down, but the level of aggregate agency in the universe has gone way up.
On a super simple model that just takes these two into account, you might multiply them together to get the total influence available at a certain time (and then divide by the number of people alive at that time to get the average person’s influence). This number will peak somewhere in the middle (assuming it’s zero both at the Big Bang and at the Heat Death).
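To illustrate, here's that super simple model in code (all the functional forms and constants are made up just to show the shape):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1001)            # 0 = Big Bang, 1 = heat death (rescaled)

time_priority = 1 - t                       # earlier moments have more of the future left to influence
agency = np.exp(-((t - 0.6) ** 2) / 0.02)   # conscious agents appear, flourish, then vanish
population = 1 + 1000 * agency              # crude: more agency means more people

total_influence = time_priority * agency    # zero at both the Big Bang and the heat death
per_person_influence = total_influence / population

print(t[np.argmax(total_influence)])        # peaks somewhere in the middle
print(t[np.argmax(per_person_influence)])
```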
That maybe doesn’t tell you much, but then you could start taking into account some other considerations, like how x-risk could result in a permanent drop of agency down to zero. Or how perhaps there’s an upper limit on how much agency is potentially available in the universe.
In any case, it seems like the direction of causality should be a pretty important part of the analysis (even if it points in the opposite direction of another factor, like increasing agency), either as part of the prior or as one of the first things you update on.
> Separately, I still don’t see the case for building earliness into our priors, rather than updating on the basis of finding oneself seemingly-early.
Do you have some other way of updating on the arrow of time? (It seems like the fact that we can influence future generations, but they can’t influence us, is pretty significant, and should be factored into the argument somewhere.)
I wouldn’t call that an update on finding ourselves early, but more like just an update on the structure of the population being sampled from.
> And the current increase in hinginess seems unsustainable, in that the increase in hinginess we’ve seen so far leads to x-risk probabilities that lead to drastic reduction of the value of worlds that last for eg a millennium at current hinginess levels.
Didn’t quite follow this part. Are you saying that if hinginess keeps going up (or stays at the current, high level), that implies a high level of x-risk as well, which means that, with enough time at that hinginess (and therefore x-risk) level, we’ll wipe ourselves out; and therefore that we can’t have sustained, increasing / high hinginess for a long time?
(That’s where I would have guessed you were going with that argument, but I got confused by the part about “drastic reduction of the value of worlds …” since the x-risk component seems like a reason the high-hinginess can’t last a long time, rather than an argument that it would last but coincide with a sad / low-value scenario.)
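(To make my reading concrete with made-up numbers: if per-century x-risk at current hinginess levels were 10%, a world would last a millennium at that level with probability 0.9^10 ≈ 35%, which sounds to me like the high-hinginess era probably not lasting, rather than lasting but being low-value.)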
It seems to me that there is some tension between these two criticisms — you want EA to focus less on analysis, but you also don’t want us to be too wedded to our conclusions. So how are we supposed to change our minds about the conclusions w/o doing analysis?
My guess (based on the rest of the essay), is that you want our analysis to be more informed by practice.
But I just want to emphasize that, in my view, analysis (and specifically cause neutrality) is what makes EA unique. If you take out the analysis, then it’s not clear what value EA has to offer the rest of the charity / social impact world.