Thanks for this thoughtful reflection. I do want to register that I think I disagree that there wouldn’t be much EA work to do after a nuclear exchange between the US and Russia. It would be a scary, hard world to live in, and one where many of our previous priorities are no longer relevant, but there is work I think we could do there, and doing it could improve the trajectory of civilization.
Though I should say that I think tac nuke use in Ukraine is also a reasonable trigger to leave, depending on your personal situation, productivity, ease of leaving, where you’re going, etc—I really just want people to be sure they are doing the EV calculations and not treating risk-minimization as the sudden controlling priority.
My impression is that US intelligence has been very impressive with regard to Russia’s military plans to date. US officials confidently called the war in Ukraine by December and knew the details of the planned Russian offensive. They’re saying now that they think Putin is not imminently planning to use a tactical nuke. If they’re wrong and Putin uses a tactical nuke next week, that’d be a big update toward expecting that they also won’t predict further nuclear escalation correctly, but my model is that before the use of a tactical nuke, we’ll get US officials saying “we’re worried Russia plans to use a tactical nuke”. If I’m right about that, then I further predict they’ll be giving pretty accurate assessments of whether Russia is going to escalate from there.
That suggests a threshold to leave of [tactical nuke use in Ukraine, if it surprises US officials] or [tactical nuke use in Ukraine followed by a warning from US officials that Putin seems inclined to escalate further], either of which would be a 10x or more further update on risk in my view.
Hmm, what mechanism are you imagining for the advantage of getting out of cities before other people? You could have already booked an airbnb/rented a house/etc before the rush, but that’s an argument for booking the airbnb/renting the house, not for living in it.
To be clear, I will also leave SF in the event of a strong signal that we’re on the brink of nuclear war—such as US officials saying they believe Russia is preparing for a first launch, or the US using a nuclear weapon ourselves in response to Russian use, or strategic rather than tactical Russian use (for example against Kyiv), or Russia declaring war on NATO or declaring intent to use nuclear weapons outside Russian territory.
I mostly expect overreaction in cases of a weaker signal such as a Russian “test” on territory Russia claims as Russian, or tactical use, or Russia inducing a meltdown at a nuclear power plant—all of which would be scary, destabilizing, precedent-setting events that dramatically raise the odds of a nuclear war, but which I wouldn’t call a “clear and unambiguous signal that a large amount of the world may be utterly destroyed in a matter of hours”.
In this framework, before tac nuke use in Ukraine, your expected life hours lost were remaining life hours * P(nuke in your location | nuke in Ukraine) * P(nuke in Ukraine), so your subsequent expected life hours lost should change by a factor of 1/P(nuke in Ukraine), or about six.
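To make that arithmetic concrete, here’s a minimal sketch. Every number in it is a placeholder I’m inventing for illustration (the 16% prior is just whatever makes the factor come out near six), not anyone’s actual forecast:

```python
# Illustrative sketch of the update described above; all numbers are
# placeholder assumptions, not forecasts.
remaining_life_hours = 500_000        # roughly 57 years, purely illustrative
p_nuke_in_ukraine = 0.16              # prior on a nuke being used in Ukraine
p_nuke_here_given_ukraine = 1e-4      # assumed P(nuke in your location | nuke in Ukraine)

# Before any nuke use in Ukraine:
expected_hours_lost_before = (remaining_life_hours
                              * p_nuke_here_given_ukraine
                              * p_nuke_in_ukraine)

# After nuke use in Ukraine, the last factor becomes 1:
expected_hours_lost_after = remaining_life_hours * p_nuke_here_given_ukraine

print(expected_hours_lost_after / expected_hours_lost_before)  # 1 / 0.16 = 6.25
```

Note that the ratio doesn’t depend on the placeholder values for remaining life hours or the conditional risk to your city; it’s just 1/P(nuke in Ukraine).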
Though I think straightforwardly applying that framework is wrong, because it assumes that if you don’t flee as soon as there’s nuke use in Ukraine, you don’t flee at all even at subsequent stages of escalation; instead, you want P(nuke in your location | nuke in Ukraine and no later signs of danger which prompt you to flee). To figure out your actual expected costs from not fleeing as soon as there’s tactical nuke use in Ukraine, you need to have an estimate of how likely it is that there’d be some warning after the tactical nuke use before a nuclear war started.
This is also tricky because I don’t think it lets you compare to the option I’d actually advocate for, which is something like “flee at a slightly later point”: the US has good intel on Russia, and it seems likely that US officials will know if Russia appears to be headed towards nuclear war. If you compare “flee the instant a tactical nuke is used in Ukraine” with “stay no matter what”, then “stay no matter what” doesn’t look good, but what you actually want to compare is “flee the instant a tactical nuke is used in Ukraine” with “flee at some subsequent sign of danger”. That is, the real question is how many life-hours you get by fleeing early that you don’t get by fleeing late (either because we don’t get any warning, or because by then many people are panicking and fleeing). A sketch of that comparison follows below.
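For what it’s worth, here’s a hedged sketch of the comparison I have in mind; every probability and cost below is made up purely to show the structure:

```python
# Hedged sketch comparing "flee the instant a tactical nuke is used in
# Ukraine" against "flee at a subsequent sign of danger". All numbers are
# made-up placeholders; the point is the structure of the comparison.
remaining_life_hours = 500_000
flee_cost_hours = 300       # assumed cost of relocating now (lost work, disruption)
panic_multiplier = 1.5      # assumed extra cost of fleeing later, alongside everyone else

# assumed P(a later sign of danger prompts you to flee | nuke in Ukraine)
p_later_warning = 0.05
# assumed P(nuke hits your location with no usable prior warning | nuke in Ukraine)
p_nuke_here_unwarned = 2e-4

# Policy A: flee immediately after tactical nuke use in Ukraine.
cost_flee_now = flee_cost_hours

# Policy B: stay for now; flee only at a subsequent sign of danger.
# You pay the (higher, panicked) fleeing cost only in the warning branch,
# and bear the nuke risk only in the no-warning branch.
cost_flee_later = (p_later_warning * flee_cost_hours * panic_multiplier
                   + p_nuke_here_unwarned * remaining_life_hours)

print(cost_flee_now, cost_flee_later)  # 300 vs. ~122 under these placeholders
```

Under these placeholder numbers waiting comes out ahead, but the answer hinges almost entirely on the assumed chance of a nuke arriving with no usable warning, which is exactly the quantity that the quality of US intel determines.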
You’re right, my post doesn’t make clear enough the difference between current risk and risk conditional on nuclear use in Ukraine.
Trying to figure out expected hours lost in the latter case seems to depend a ton on which of their forecasts you look at. My instinctive reaction was that 2000 is way too high: they’re at 16% on Russia using a nuclear weapon in Ukraine, so its use can only increase risk by a factor of 6 or so, but they state it’d raise risk by a factor of 10 or so if it happened. I’m going to use the factor of 6, because I don’t understand how they got 10 and it reads like it might just be an order of magnitude estimate.
Using their ‘forecasters’ aggregate’, where the mean is 13, the hours lost conditional on use in Ukraine are still less than 100 hours. Using their ‘full range’, where the mean is 150, the hours lost conditional on use in Ukraine are about 1000. That suggests it’s quite important to figure out which of those aggregation methods makes more sense, as I suspect the costs of fleeing are generally higher than 100 but less than 1000 hours. (Though fleeing in the least costly way could reduce the costs of fleeing to less than 100 hours and thus make it worth it even in the lower case.)
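To show where those two numbers come from, here’s the arithmetic spelled out, assuming the factor-of-six update from the 16% figure above:

```python
# Reproducing the arithmetic above: the 13-hour and 150-hour means are the
# quoted unconditional expected-hours-lost figures, scaled by the ~6x update
# implied by a 16% prior on nuclear use in Ukraine.
update_factor = 1 / 0.16  # about 6.25

for label, unconditional_hours in [("forecasters' aggregate", 13),
                                   ("full range", 150)]:
    conditional_hours = unconditional_hours * update_factor
    print(label, round(conditional_hours))  # roughly 81 and 938 hours
```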
Overreacting to current events can be very costly
And Buck Shlegeris and Nate Thomas and Eitan Fischer and Adam Scherlis (though Buck didn’t attend Stanford and just hung out with us because he liked us). I wish I knew how to replicate whatever we were smoking back then. I’ve tried a couple times but it’s a hard act to follow.
Fwiw, I gave Scott permission to mention the above; I think by some metrics I was obviously a promising EA even when I was also failing out of college, and in particular my skillset is public communications, which means people could directly evaluate my EA promisingness via my blog posts even when by legible societal metrics of success I was a bit of a mess.
To be clear, though, I don’t think EAs should worry about monkeypox more than they currently are. EAs are already pretty aware that pandemics can be very bad, are in favor of doing more to detect them early, understand how exponential growth works, and are in a pretty functional information ecosystem where they’ll hear about monkeypox if it becomes a matter of greater personal safety concern or if we get to the point where it’s a good idea for people to get smallpox vaccinations.
Huh, interesting example of “should you reverse any advice you hear?”. I have mostly encountered US articles in which experts from the CDC and similar agencies are quoted telling the public unhelpful things like “very few people have monkeypox in the US right now”, “there’s no evidence this variant is more transmissible”, and “don’t panic”.
I hadn’t thought of this and I’m actually intrigued—it seems like prediction markets might specifically be good for situations where everyone ‘knows’ something is up but no one wants to be the person to call it out. The big problem to my mind is the resolution criterion: even if someone’s a fraud, it can easily be ten years before there’s a big article proving it.
Disclaimer that I’ve given this less than ten minutes of thought, but I’m now imagining a site pitched at journalists as an aggregated, anonymous ‘tip jar’ about fraud and misconduct. I think lots of people would at least look at that when deciding which stories to pursue. (Paying sources, or relying on sources who’d gain monetarily from an article about how someone is a fraud, is extremely not okay by journalistic ethics, which limits substantially what you can do here.)
ooooops, I’m sorry re: the imposter syndrome—do you have any more detail? I don’t want to write in a way that causes that!
I think checking whether results replicate is also important and valuable work which is undervalued/underrewarded, and I’m glad you do it.
One dynamic that seems unique to fraud investigations specifically is that while most scientists have some research that has data errors or isn’t robust, most aren’t outright fabricating. Clear evidence of fake data more or less indicts all of that scientist’s other research (at least to my mind) and is a massive change to how much they’ll tend to be respected and taken seriously. It can also get papers retracted, while (infuriatingly) papers are rarely retracted for errors or lack of robustness.

But in general I think of fraud as similar in some important ways to other bad research, like the lack of incentives for anyone to investigate it or call it out, and the frequency with which ‘everyone knows’ that research is shady or doesn’t hold up and yet no one wants to be the one to actually point it out.
Liars
Matt Levine on the Archegos failure
Update: I have since been told that the deadline is going to be sooner, August 4th! So sorry for the late change.
August 18th and unfortunately US only—I’m hoping to change that someday but Vox has not taken the legal and regulatory steps that’d make it possible for them as a US-based company to make hires outside the US.
I don’t know, but I think likely days not weeks. Tactical nuke use will be a good test ground for this—do we get advance warning from US officials about that? How much advance warning?