+1. Did the January edition of this, and wholeheartedly endorse it! I think it was the best possible use of my time (and at least a few times better than the counterfactual), and it was incredibly well run (both with respect to the organisers and the mentors).
tmychow
Thanks for curating the post!
Just a quick comment to highlight the responses we have given to the list of disagreements, and to tweak your summary a bit to better reflect what I (speaking for myself, not my two co-authors) see our post as saying:
Markets are a good way of finding an outside view on a topic, in this case on transformative AI. Long-term real rates would be higher than they currently are if markets believed that transformative AI was coming in the next ~30 years. If you believe timelines are short, you should personally be saving less or borrowing more. If you believe timelines are short and that the market will realise this a meaningful amount of time before transformative AI arrives, you should take a short Treasuries position. If you believe that the market should already have realised it and priced it in by now, you should rethink your timelines.
Edit: As it turns out, there’s a nice third party summary which even more concisely captures the essence of what we are trying to get across!
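To make the rates mechanism in that summary concrete, here is a minimal sketch of the standard Ramsey-rule logic linking expected growth (and existential risk) to real rates. The function and every parameter value below are illustrative assumptions, not numbers from the post:

```python
# Minimal sketch of the Ramsey-rule logic: high expected growth or high
# existential risk should push real interest rates up. All parameter
# values here are assumptions chosen purely for illustration.

def ramsey_real_rate(time_pref: float, risk_aversion: float,
                     expected_growth: float, annual_xrisk: float) -> float:
    """r = delta + gamma * g + x (a standard approximation).

    time_pref       -- pure rate of time preference (delta)
    risk_aversion   -- elasticity of marginal utility (gamma)
    expected_growth -- expected annual consumption growth (g)
    annual_xrisk    -- annualised probability of catastrophe (x)
    """
    return time_pref + risk_aversion * expected_growth + annual_xrisk

# Business-as-usual world: ~2% growth, negligible existential risk.
print(f"{ramsey_real_rate(0.01, 1.5, 0.02, 0.00):.1%}")  # ~4%

# A world expecting transformative AI: 10% growth plus 1%/yr risk
# implies a real rate far above observed long-term rates.
print(f"{ramsey_real_rate(0.01, 1.5, 0.10, 0.01):.1%}")  # ~17%
```

The point is only directional: if markets priced in transformative AI under anything like this logic, long-term real rates would be much higher than they currently are.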
To try to group/summarize the discussion in the comments and offer some replies:
1. ‘Traders are not thinking about AGI, the inferential distance is too large’; or ‘a short can only profit if other people take the short position too’
(a) Anyone who thinks they have an edge in markets thinks they’ve noticed something whose inferential distance is so large that no one else has seen it.
Any trade requires that the market price eventually converges to the ‘correct’ price
⇒ This argument proves too much – it’s a general argument against ever betting that the market will correct an incorrect price!
Those arguing against need to give a clearer argument for why this situation is fundamentally different from any other (a sketch of the mechanics of such a convergence trade appears at the end of this point)
Sovereign bond markets are easily some of the most liquid and well-functioning markets ever to exist
(b) Many financial market participants ARE thinking about these issues.
Asset manager Cathie Wood has AGI timelines of 6-12 years and is betting the house on that (“AGI could accelerate growth in GDP to 30-50% per year”)
Masayoshi Son raised $100 billion for Softbank’s Vision Fund on the basis that superintelligence will arrive by 2047
The prospect of AGI is not a Thielian secret.
(c) Do make sure to read section X on “Trade risk and foom risk”, where we acknowledge that if you both (i) are extremely skeptical of market efficiency, and (ii) think foom is the likely takeoff scenario, then trading seems like a less good idea.
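As referenced above, here is a minimal sketch of the mechanics of the convergence trade on long-dated Treasuries. The maturity and both yields are assumptions picked purely for illustration; this sketches the logic, and is not trade advice:

```python
# Sketch of why a short on long-dated bonds profits if the market later
# prices in transformative AI: long-maturity bond prices fall sharply
# when yields rise. The 2% and 5% yields and the 30-year maturity are
# illustrative assumptions only.

def zero_coupon_price(yield_rate: float, maturity_years: float) -> float:
    """Price per $1 of face value of a zero-coupon bond."""
    return 1.0 / (1.0 + yield_rate) ** maturity_years

price_today = zero_coupon_price(0.02, 30)     # market yield now (assumed)
price_repriced = zero_coupon_price(0.05, 30)  # yield once TAI is priced in (assumed)

# The short position profits by the price decline once the market converges.
print(f"price today:      {price_today:.3f}")     # ~0.552
print(f"after repricing:  {price_repriced:.3f}")  # ~0.231
print(f"gain per $1 face: {price_today - price_repriced:.3f}")
```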
2. Stocks versus bonds
Again, we refer to the detailed discussion of stocks in this companion post (appendix 1):
(1) Stocks cannot capture the risk of unaligned AI
(2) Developers of TAI might not actually profit much
(3) The developers of TAI might not be publicly traded or even exist yet
(4) The development of TAI could even lower stock prices!
To be clear, though: the economic logic suggests stocks are bad for forecasting timelines (due to the four reasons mentioned in that post)
BUT stocks could still be a good way to earn money betting on short timelines (if the four sources of noise mentioned in that post don’t turn out to hold)
3. Other empirical evidence on real rates
Again, we refer to the detailed discussion in this companion post (appendix 2) and to the important discussion of econometric caveats in section V (“Caveats”)
We emphasize that we would love to have more/better empirical evidence with respect to asset pricing under existential risk (appendix 3)
The challenge with using the historical data (e.g. the Bank of England series brought up in the comments) is, as discussed in section IV and in the appendix, that these data are infected with (1) poor estimates of expected inflation and (2) poor estimates of credit risk
For example, in the Schmelzing paper that has been cited, certain claimed spikes in the risk-free rate (e.g. during the Napoleonic Wars) look far more like spikes in default risk
Similarly, the ex ante real interest rate during World War II is negative in his data, which seems unlikely; the sketch below shows how both kinds of estimation error feed directly into the inferred real rate
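To see why those two estimation problems matter so much, here is a minimal sketch of the Fisher-style decomposition used to back out ex ante real rates: an error in expected inflation or an unmodelled default premium lands in the real-rate estimate one-for-one. All numbers are assumptions for illustration, not values from Schmelzing’s data:

```python
# Sketch of the decomposition behind ex ante real-rate estimates:
#   r = nominal rate - expected inflation - default premium.
# Misattributing a default premium to the risk-free rate (or
# mis-estimating expected inflation) distorts r one-for-one.
# All numbers below are illustrative assumptions.

def ex_ante_real_rate(nominal: float, expected_inflation: float,
                      default_premium: float = 0.0) -> float:
    """Approximate Fisher decomposition of the real risk-free rate."""
    return nominal - expected_inflation - default_premium

# Treat a wartime nominal-rate spike as entirely risk-free:
print(f"{ex_ante_real_rate(0.08, 0.03):.2%}")  # 5.00%: apparent real-rate spike

# Attribute most of the same spike to default risk instead:
print(f"{ex_ante_real_rate(0.08, 0.03, default_premium=0.04):.2%}")
# 1.00%: the risk-free real rate barely moves
```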
Pre-committing to not elaborating further, but I wanted to echo what is said in this comment and give a non-anonymous account as someone who (due to personal experience with reporting misconduct) also has feelings similar to KnitKnack’s.
Edit: I think Chana’s comment is helpful context, i.e. it seems good if people’s expectations going in are calibrated. CEA CH’s position is that it is there to “address problems that could prevent the effective altruism community from fulfilling its potential for impact”.
In particular, they “don’t see pursuing justice as [their] mission” and “protecting people from bullies is sometimes a part of [their] work, and something [they’d] always like to be able to do, but it’s not [their] primary goal”.
On a personal note, my advice to people who are considering going to CEA CH is to keep this in mind. To the extent that there is a trade-off between impact and justice, it may not resolve in a way that is “just” from your POV, and their work on interpersonal harm does take the talent bottleneck seriously, e.g. you should probably think about what they perceive the potential impact of the perpetrator to be.