See also discussion at https://www.lesswrong.com/posts/MKS4tJqLWmRXgXzgY/why-should-i-assume-ccp-agi-is-worse-than-usg-agi-1?commentId=bhngv6ziB769T8ycX
When is defense in depth unhelpful?
I am pre-registering my forecasts for the amount of prize money each essay will win. In brief, I expect that these three essays will win just over half the prize money:
Utilitarians Should Accept that Some Suffering Cannot be “Offset”
Are longtermist ideas getting harder to find?
Discussions of Longtermism should focus on the problem of Unawareness
I didn’t spend much time on these forecasts, though; they are mainly based on karma, adjusted by my subjective judgement of each essay’s title/summary.
OscarD’s Quick takes
This seems true and useful to me, I’m surprised at the low agreement and karma scores!
I discuss another example here, where (using your framing) we cannot rule out that we are in the hinge of history, and since the stakes would then be so high, we ought to act significantly on that basis.
Interested if you agree with this example.
Not sure I follow properly: why would liberal democracy not matter? I think whether biological humans are themselves enhanced in various ways matters less than whether they are getting superhuman (and perhaps super-wise) advice. Though possibly wisdom is different, and you need the principal themselves to be wise, rather than just getting wise advice.
Possibly, though I expect ASI could also be used to lock in one’s values, such that there will be more stasis unless the people in power deliberately embrace dynamism and liberalism of values.
Interesting, I hadn’t seen that interview. I stand by the overall claim that AI safety is more prominent in the West than China, though I am glad to see more people in China becoming safety-oriented.
Re the CCP being more redistributionist: that could be the case, but I am also worried that once individuals aren’t economically useful their interests won’t be looked out for as much by the state, unless they stay politically empowered, which requires democracy. I think the CCP would still care enough about its people to distribute AI benefits to them even when the people aren’t useful investments, but I’m unsure. Whereas I think I would be more surprised if e.g. the US let its people be greatly deprived even if they were ~useless deadweights.
I agree that both possibilities are very risky. Interesting re belief in hell being a key factor, I wasn’t thinking about that.
Even if a future ASI could very efficiently manage today’s economy in a fully centralised way, possibly the future economy will be so much more complicated that it will still make sense to have some distributed information processing in the market, rather than having all optimisation centrally planned? It seems unclear to me one way or the other, and I assume we won’t be able to know with high confidence in advance which economic model will be most efficient post-ASI. But maybe that just reflects my economic ignorance, and others are justifiably confident.
Thanks, not over-critical at all! Good point: I am fairly confident that by my values a US-led future would be better, but I am quite uncertain how large this effect is, and each individual consideration/argument is fairly fuzzy.
I don’t have any particular China expertise, but I work in international AI governance so try to stay quite familiar with at least AI-relevant aspects of things going on in China.
Moral innovation: I considered citing something like university rankings for philosophy vs natural sciences, where Chinese universities seem to do better in the latter than the former. But I’m not sure how much to trust such rankings, and my claim is more vibes-based: even though what I hear is very Western-tinted, I am far more likely to hear about cutting-edge scientific work coming out of China than cutting-edge philosophy. Though yes, of course it is also the case that I personally just find Western philosophy more useful (specifically analytic philosophy, not continental).
Economic stasis: True, I think China is becoming more innovative and dynamic technologically/economically, and it is possible it will overall catch up with the West. Though my guess is that liberal, capitalist political-economic systems will still overall prove better for long-run innovation.
Great points, I agree both of those are concerns, and don’t have much to add. I think the risk of further democratic backsliding in the U.S. is very real, and could be AI-exacerbated. But I suppose a risk of backsliding is better than China already being autocratic.
And interesting re alt proteins, yes that seems quite plausible to me! If this ends up being the crux, it would probably be worth doing more surveys and social science work to understand it better.
Interesting, yes perhaps liberalising/democratising China may be desirable but not worth the geopolitical cost to try to make happen.
How good would a CCP-dominated AI future be?
I would like to separate out two issues:
1. Is longtermism a crux for our decisions?
2. Should we spend a lot of time talking about longtermist philosophy?
On 1, I probably think it is more crux-y than you do (and especially that it will be in the future). Currently, I think there are some big ‘market’ inefficiencies where even short-termists don’t care as much as idealised versions of their utility functions would. If short-termist institutions start acting more instrumentally rationally, lots of the low-hanging fruit of x-risk reduction interventions will be taken, and longtermists will need to focus specifically on the weirder things that are more specific to our views, e.g. ensuring the future is large, and that we don’t spread wild animal suffering to the stars. So actually maybe I agree that for now lots of longtermists should focus on x-risks while there are still lots of relatively cheap wins, but I expect this to be a pretty short-lived period (maybe a few decades?), and that after that longtermism will have a more distinct set of recommendations.
On 2, I also don’t want to spend much more time on longtermist philosophy, since I am already so convinced of longtermism that I expect another critique, like all the ones we have already had, won’t move me much. And I agree that better-futures-style work (especially empirically grounded work) seems more promising.
Good point: yes, I think empirical findings that have a large bearing on what longtermists should be doing would also count for me, and perhaps it is still easier to come up with important new considerations in empirical work.
That’s a good point: responding to existing ideas does seem less exciting and original, but I agree it is still valuable, and perhaps under-rewarded for that reason.
Are longtermist ideas getting harder to find?
We should act as if we live at the hinge of history
Nice, yes I think we roughly agree! (Though maybe you are nobler than me in terms of finding a broader range of views provocatively plausible and productive to engage with.)
Right, but because we have limited resources, we need to choose whether to invest more in just a few stronger layers, or invest less in each of a larger number of layers. Of course, in an ideal world we would have heaps of really strong layers, but that may be cost-prohibitive. A toy model of the tradeoff is below.
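As a rough illustration (my own toy model, with stylised assumptions: layers fail independently and draw on a single shared budget $B$): if layer $i$ fails with probability $f(b_i)$ when given investment $b_i$, and the system fails only if every layer fails, then

$$P(\text{failure}) = \prod_{i=1}^{n} f(b_i), \qquad \text{subject to } \sum_{i=1}^{n} b_i = B.$$

If layers are exponentially effective, $f(b) = e^{-kb}$, then $P(\text{failure}) = e^{-kB}$ however the budget is split, so few-strong vs many-weak is a wash. But if each layer also carries a fixed overhead $c$ (integration, maintenance, attention), only $B - nc$ is left for strengthening, giving $P(\text{failure}) = e^{-k(B - nc)}$, which rises with $n$: extra layers are actively harmful under this cost curve. So whether defense in depth helps seems to turn on the curvature of $f$ and on per-layer overheads, not on the layer count per se.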