Blog at The Good Blog https://thegoodblog.substack.com/
Nathan_Barnard
Thanks! Fixed
The value of x-risk reduction
Wow, that’s really interesting. I’ll look more deeply into that. It’s definitely not what I’ve read happened, but at this point I think it’s probably worth me reading the primary sources rather than relying on books.
I have no specific source saying explicitly that there wasn’t a plan to use nuclear weapons in response to a tactical nuclear weapon. However, I do know what the decision-making structure for the use of nuclear weapons was. In a case where there hadn’t been a decapitating strike on civilian administrators, the President was presented with plans from the SIOP (the US nuclear plan), which were exclusively plans based around a strategy of destruction of the Communist bloc. The SIOP was the US nuclear plan, but triggers for nuclear war weren’t in it anywhere. When individual soldiers had tactical nuclear weapons their instructions weren’t fixed: they could be instructed explicitly not to use tactical nukes, but in general the structure of the US armed forces was to let the commanding officer decide the most appropriate course of action in a given situation.
Second thing to note: tactical nukes were viewed as battlefield weapons by both sides. Neither side viewed them as anything special because they were nuclear, in the sense that their use should engender an all-out attack.
So maybe I should clarify that by saying that there was no plan that required the use of tactical nuclear weapons in response to a Soviet use of them.
Probably the best single text on US nuclear war plans is The Bomb by Fred Kaplan.
Probably the best source on how tactical nukes were used is Command and Control by Eric Schlosser.
On the second one, I have a post here that serves to give the wider strategic context:
But it’s not clear to me how Berlin is relevant. It’s relevant insofar as it’s an important factor in why the crisis happened, but it’s not clear to me why Berlin increased the chance of escalation into nuclear war, beyond the fact that the Soviet response to a US invasion of Cuba could have been to attempt to take Berlin.
Why does the China-India war matter here, post-Sino-Soviet split?
How close to nuclear war did we get over Cuba?
Thanks for your feedback! Unfortunately I am a smart junior person, so it looks like we know who’ll be doing the copy editing.
Yeah I think that’s very reasonable
When is AI safety research harmful?
Yes!
I think three really good books are One Minute to Midnight, Nuclear Folly, and Gambling with Armageddon. Lots of other ones have shortish sections, but these three focus almost completely on the crisis.
It also deals with the issue from the same perspective I’ve presented here.
The Mystery of the Cuban missile crisis
I think there is something to the claim being made in the post, which is that longtermism as it currently stands is mostly about increasing the number of people in the future living good lives. It seems genuinely true that most longtermists are prioritising creating happiness over reducing suffering. This is the key factor which pushes me towards longtermist s-risk.
I think the key point here is that it is unusually easy to recruit EAs at uni compared to when they’re at McKinsey. I think it’s unclear a) whether going to McKinsey is among the best things for a student to do, and b) how much less likely it is that an EA student goes to McKinsey. I think it’s pretty unlikely that going to McKinsey is the best thing to do, but I also think that EA student groups have a relatively small effect on how often students go into elite corporate jobs (a bad thing from my perspective), at least in software engineering.
I’m obviously not speaking for Jessica here, but I think the reason the comparison is relevant is that the high spend by Goldman etc. suggests that spending a lot on recruitment at unis is effective.
If this is the case, which I think is also supported by the success of well-funded groups with full- or part-time organisers, and EA is in an adversarial relationship with these large firms, which I think is largely true, then it makes sense for EA to spend similar amounts of money trying to attract students.
The relevant comparison is then the value of the marginal student recruited versus malaria nets etc.
I’m going through this right now. There have just clearly been times, both as a group organiser and in my personal life, when I should have just spent/taken money and in hindsight would clearly have had higher impact, e.g. buying uni textbooks so I could study with less friction and get better grades.
I view India-Pakistan as the pair of nuclear-armed states most likely to have a nuclear exchange. Do you agree with this, and if so, what should it imply about our priorities in the nuclear space?
As long as China and Russia have nuclear weapons, do you think it’s valuable for the US to maintain a nuclear arsenal? What about the UK and France?
So the model is more like: during the Russian revolution, for instance, there’s a 50/50 chance that whichever leader came out of it was very strongly selected to have dark triad traits, but this is not the case for the contemporary CCP.
Yeah, seems plausible. 99:1 seems very, very strong. If it were 9:1, that means we’re in a 1/1000 world; 1:2 means approximately a 1/10^5 world. I don’t have good enough knowledge of rulers before they gained close-to-absolute power to be able to evaluate that claim. Off the top of my head, Lenin and Prince Lvov (the latter led the provisional government after the February revolution) were not dark-triady.
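To make that concrete, here’s roughly the calculation I have in mind, as a sketch (treating k = 3 observed non-dark-triad leaders as an assumption for illustration):

```python
# Sketch: if post-revolution selection puts dark-triad leaders at odds s:1,
# then any one leader is *not* dark triad with probability 1/(s+1), so
# observing k independent non-dark-triad leaders has probability (1/(s+1))^k.

def p_observe_non_dark(selection_odds: float, k: int = 3) -> float:
    """P(all k leaders lack dark-triad traits | selection odds s:1)."""
    return (1 / (selection_odds + 1)) ** k

print(p_observe_non_dark(99))  # ~1e-06
print(p_observe_non_dark(9))   # 0.001, i.e. the 1/1000 world
```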
The definition of unstable also looks important here. If we count Stalin and Hitler, both of whom came to power during peacetime, then it seems like we should also count the Soviet leaders who succeeded Stalin, the CCP leaders who succeeded Mao, Bashar al-Assad, Pinochet, and Mussolini. A sanity check from that group makes it seem much more like 1:5 than 1:99. Deng was definitely not dark triad, nor was Bashar; I don’t know enough about the others, but they don’t seem like it?
If we’re only counting Mao, then the selection effect looks a lot stronger off the top of my head, but it should also probably be adjusted, because the mean level of sadism is likely much higher after a period of sustained fighting, given, for instance, the effect of prison guards becoming more sadistic over time, and violence generally being normalised.
I don’t know enough about psychopathy or Machiavellianism.
It’s also not completely clear to me that Stalin and Mao were in the top 10% for sadism, at least. Both came from very poor peasant societies. I know that Russian peasant life in 1910, at least, was unbelievably violent, and peasants regularly did things which we sort of can’t imagine. My general knowledge of European peasant societies (e.g. crowds at public executions) makes me think it’s likely that the average Chinese peasant in 1910 would have scored very highly on sadism. If you look at the response of the Chinese police/army to the 1927 Communist insurgency, it was unbelievably cruel.
This makes screening for malicious actors seem worse and genetic selection seem better.
Apologies that this is so scattered.
I’m currently doing research on this! The big, big driver is age; income is pretty small comparatively, and the education effect goes away when you account for income and age. At least, this is what I get from the raw Health Survey for England data lol.
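For what it’s worth, the kind of check I’m running looks roughly like this; the variable names are hypothetical, not the actual Health Survey for England columns:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hse_raw.csv")  # hypothetical extract of the raw HSE data

# Education looks predictive of health on its own...
print(smf.ols("health_score ~ education", data=df).fit().params)
# ...but its coefficient shrinks toward zero once age and income enter,
# and age dominates.
print(smf.ols("health_score ~ education + age + income", data=df).fit().params)
```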
It seems like a strange claim both that the atrocities committed by Hitler, Stalin, and Mao were substantially more likely because they had dark triad traits, and that when doing genetic selection we’re interested in removing the upper tail; in the article it was the top 1%. To take this somewhat naively, if we think that the Holocaust and Mao’s and Stalin’s terror-famines wouldn’t have happened unless all three leaders exhibited dark tetrad traits in the top 1%, this implies we’re living in a world that comes about with probability 1/10^6, i.e. 1 in a million, assuming the atrocities were independent events. This implies a need to come up with a better model.
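Spelling the naive version out as a quick sketch (this is just the arithmetic implied above, not a serious model):

```python
# Three leaders, each independently in the top 1% for dark-tetrad traits,
# treated as a necessary condition for each atrocity.
p_top_1_percent = 0.01
p_world = p_top_1_percent ** 3
print(f"{p_world:.0e}")  # 1e-06, the 1-in-a-million world
```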
Edit 2: this is also wrong. Assuming independence, the number of atrocities should be binomially distributed with p = 1/100 and n = the number of leaders of authoritarian regimes with sufficiently high state capacity, or something. It should probably be a Markov chain model.
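A rough sketch of what that binomial version would look like; the value of n here is something I’ve made up purely for illustration:

```python
from math import comb

def binom_pmf(k: int, n: int, p: float) -> float:
    """P(exactly k successes) for a Binomial(n, p)."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

n = 50    # hypothetical number of leaders of high-state-capacity authoritarian regimes
p = 0.01  # probability any one such leader is in the top 1%
p_at_least_3 = 1 - sum(binom_pmf(k, n, p) for k in range(3))
print(f"P(>=3 such leaders out of {n}) = {p_at_least_3:.3f}")  # ~0.014
```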
If we adjust the parameters to the top 10% and say that the atrocities were 10% more likely to happen if this condition is met, this implies we’re living in a world that’s come about with probability (p/P(Dark triad|Atrocity))^3, where p is the probability that the atrocity would have occurred without Hitler, Stalin, and Mao having dark triad traits. The interpretation of P(Dark triad|Atrocity) is: what’s the probability that a leader has dark triad traits given that they’ve committed an atrocity? If you have p as 0.25 and P(Dark|Atrocity) as 0.75, this means we’re living in a 1/27 world, which is much more reasonable. But this makes this intervention look much less good.
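The same calculation in code, with the illustrative values above:

```python
p = 0.25                      # P(atrocity happens anyway, without a dark-triad leader)
p_dark_given_atrocity = 0.75  # P(leader has dark-triad traits | atrocity occurred)
p_world = (p / p_dark_given_atrocity) ** 3
print(p_world)  # (1/3)^3 ~ 0.037, i.e. roughly a 1/27 world
```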
Edit: the maths in this section is wrong, because I treated a 10% increase in probability as p becoming 1.1*p, rather than p having an elasticity of 0.1 with respect to the resources put into the intervention, or something. I will edit this later.
Excluding 10% of the population from political power seems like a big ask. If the intervention reduced the probability of someone with dark triad traits coming to power (in a system where they could commit an atrocity) by 10%, which seems ambitious to me, this reduces the probability of an atrocity by 1% (if the above model is correct). Given that this requires excluding 10% of the population from political power, the chance of which I’d generously put at 10%, the EV of the intervention is a 0.1% reduction in the probability of an atrocity. Although this would increase if the intervention could be used multiple times, which seems likely.
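Making the expected-value arithmetic explicit (all three 10% figures are the guesses above, not estimates from data):

```python
effect_on_selection = 0.10   # ambitious: screening cuts P(dark-triad leader) by 10%
atrocity_sensitivity = 0.10  # from the model: atrocity ~10% more likely under such a leader
p_adoption = 0.10            # generous chance the exclusion is ever politically feasible

reduction_if_adopted = effect_on_selection * atrocity_sensitivity  # 1% per use
ev = reduction_if_adopted * p_adoption                             # 0.1% in expectation
print(f"{ev:.1%}")  # 0.1%
```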
Yeah, this is just about the constant-risk case; I probably should have said explicitly that it doesn’t cover the time of perils, although the same mechanism with neglectedness should still apply.