I wonder if Berkeley had a notably high rate of both no-shows and last-minute interest in attending because the FTX crisis two weeks prior probably changed a lot of people’s calculus about whether and in what ways they want to be engaged with the EA community/EA network. (Some in the direction of ‘actually I don’t want to attend; I have lost a lot of belief that EA is worthwhile’, and some in the direction of ‘I’ve been trying to make sense of this alone and would particularly benefit from attending discussions and talking with likeminded people’).
A fixed rate of no-shows is much easier to handle than an unprecedented rate, and a predictable surge in interest right before/right after the deadline is much easier to plan for than an unusual one. I’m curious how Berkeley compares to other recent EAGs that way.
I think Asterisk is deliberately trying to look different from Substack, Medium, news sites, etc., rather than doing so accidentally, as a product of being unaware of how to look like those sites.
My best guess is:
- if you asked SBF “did you know that Kelsey was writing a story for Vox based on your conversation with her, sharing things you said to her in DMs?” the answer would be yes. Again, I sent an email explicitly saying I was writing about this, from my Vox account with a Vox Media Senior Reporter footer, which he responded to.
- if you asked SBF “is Kelsey going to publish specifically the parts of the conversation that are the most embarrassing/look bad”, the answer would be no.
- if you asked me “is SBF okay with this being published”, I think I would have said “I know he knows I’m writing about it and I’m pretty damn sure he knows how ‘on the record’ works but he’s probably going to be mad about the tone and contents”.
I agree that it would be bizarre and absurd to believe, and disingenuous to claim, “Sam thought Kelsey would make him look extremely bad, and was okay with this”.
I believed that SBF thought not that the conversation was secret but that the coverage would be positive.
Some thoughts about this—
I genuinely thought SBF spoke to me with the knowledge I was a journalist covering him, knew we were on the record, and knew that an article quoting him was going to happen.*** The reasons I thought that were:
- I knew SBF was very familiar with how journalism works. At the start of our May interview I explained to him how on the record/off the record works, and he was (politely) impatient because he knew it because he does many interviews.
- I knew SBF had given on the record interviews to the New York Times and Washington Post in the last few days, so while it seemed to me like he clearly shouldn’t be talking to the press, it also seemed like he clearly was choosing to do so for some reason and not at random. Edited to add: additionally, it appears that immediately after our conversation concluded he called another journalist to talk on the record and say among other things that he’d told his lawyer to “go fuck himself” and that lawyers “don’t know what they’re talking about”. I agree it is incredibly bizarre that Sam was knowingly saying things like this on the record to journalists.
- Obviously SBF’s communications right now are going to be subpoenaed and presented in court. I can still get why he might not want them in the news, but that does seem like a significant constraint on how private he expected them to be. If we’d talked over Signal I’d feel differently.
- When I emailed him “hey! Writing about what you said happened and your plans now. Just wanted to confirm you still have access to your Twitter account and that isn’t a troll or something- Kelsey Piper, Vox Media”, it seemed possible to me that he would claim it was a troll, or decline to answer, or ask me to take the interview retroactively off the record (which by journalism norms I am not obliged to do, but I would probably have worked with him to at least some degree—there are complicated moral tradeoffs in both directions, at that point!). But he didn’t, which I thought was because he was okay with my writing a story about our conversation.
With all that said, I was less careful with SBF than I am with most people. With most people, if it seemed possible they were under seriously mind-altering substances, I’d hesitate to interview them. If I were not completely sure they understood they might appear in press, I would remind them, and for particularly salacious quotes I might even ask “okay to quote you on that?” Not all journalists do that, but I don’t want to hurt people, and I don’t want to be untrustworthy to people.

But in this case it felt to me like I had significant duties in the other direction—to get answers that made sense, if there were any, to the question of how this happened and (though as expected this did not have a thrilling answer) where the money was. A $10 billion missing-funds situation is just very, very different and much larger than most situations, and I think the right place on that tradeoff is also different.
I don’t think (as we all fret about these days) that the ends justify the means, or that it’s okay to break commitments of confidentiality as long as you have a good enough reason. I think I do believe that it’s okay to not be as proactive about commitments of confidentiality, not work as hard to remind people that they probably should want confidentiality when they seem perfectly happy to talk to you, when something happened to ten billion dollars.
I think it might be good if journalists had something like the Miranda warnings, where if you want to quote someone you have to first explicitly with established language warn them how journalism works and how to opt out, and if you failed to warn them then you don’t get to quote them. I think I would sign on to make that a norm of journalism. But it isn’t, and so I’m just balancing a lot of things that all seem important.
It seems possible that SBF thought that as a person involved in EA I wouldn’t hurt him, another person involved in EA. I don’t think that would be the right approach. It is not my job to protect EA, and that’s not what I do. It’s my job to try to make the world a better place through saying true things on topics that really really matter. I share values and priorities with many of you here, but my job comes with obligations and duties on top of those, and I think it’s overall good for the world that that’s so.
With all that said—I never intend to take a subject by surprise in publishing, and thought I had not done so. I wish that had happened differently, though I think I had serious professional obligations to write about this conversation.
*** This is edited. The original said ‘I genuinely thought SBF was comfortable with our interview being published and knew that was going to happen’, which is as written kind of absurd—obviously he didn’t want the mean stuff in print—so I’m trying to be clearer about what specifically I thought he understood and what specifically I thought he knew.
I think we added alt text to all screenshots in the piece; if we missed one, let us know.
I’ve had some people say to me “I’d like all future conversations with you to be off the record/confidential unless we agree otherwise”. I agreed to this.
I think EAs are broadly too quick to class things as infohazards instead of reasoning them through, but natsec seems like a pretty well-defined area where the reasons things are confidential are pretty concrete.
Some examples of information that is pretty relevant to nuclear risk and would not be discussed on this forum, even if known to some participants:
How well-placed are US spies in the Russian government and in Putin’s inner circle? How about Russian spies in the US government? Do the Russians know what the US response would be in the event of various Russian actions?
Does the US know where Russia’s nuclear submarines are? Can we track their movements? Do we think we could take them out if we had to? This would require substantial undisclosed tech. If we did know this, it would be a tightly held secret; degrading Russia’s second-strike capabilities (which is one effect of knowing where their subs are) might push them towards a first strike.
Relatedly, are we at all worried Russia knows where our submarines are?
In a similar genre, does the US know how to shoot down ICBMs? With 10% accuracy? 50%? 80%? Accuracy would have to be very good to be a game changer in a full exchange with Russia. (High accuracy would require substantial undisclosed technology, and be undisclosed for some of the same reasons plus to avoid encouraging other countries to innovate on weapon delivery.)
Does either side have other potentially game-changing secret tech (maybe something cyberwarfare-based)?
People making decisions on nuclear war planning have access to the answers to all of these questions, and those answers might importantly inform their decisionmaking.
The black dots assume Russia has 2,000 functional missiles that they successfully launch against the US and that successfully detonate, and that the US is unable to shoot many of them down or destroy missile launch sites before launch. My understanding is, concretely, that even if all Russian missiles currently reported ready for launch are launched, there are 1,500 of them, not 2,000, and one would expect many to be used against non-US targets (in Ukraine and Europe). The 500-missile scenario (purple triangles) seems likelier to me for how many targets Russia would try to hit.
Further, my impression of the competence of the Russian military, the readiness of their forces, the state of upkeep on their nukes and missiles, the willingness of individual commanders ordered to launch to do so, etc. is quite low. In many cases they have had an embarrassingly low success rate at firing missiles at Ukraine, which is an easier task than launching on short notice in a nuclear war. They seem to be using un-upgraded Soviet technology that is often degrading and failing, and theft of parts for sale on the black market isn’t uncommon.
For each nuclear missile, lots of things need to go right: the missile needs to be in good shape/ready to launch, the people ordered to launch need to do it, the missile needs to be successfully launched before anyone destroys the launch site, the missile needs to not be shot down, the missile needs to successfully be aimed at the target (this isn’t even very hard, but there’ve been notable Ukraine failures) and the missile needs to actually detonate at the right time. US capabilities to shoot down ICBMs, if such capabilities exist, would be extremely secret (we have no such public capabilities) but it seems like we almost definitely cannot shoot down or prevent the launch of submarine-launched missiles (of which there’d be perhaps a dozen). My personal median expectation is that submarine-launched missiles will likely hit and detonate and a relatively small share of non-submarine-launched missiles will hit and detonate. If Russia is also worried about this, they’ll probably concentrate missiles further on critical targets.
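To illustrate how those requirements compound, here’s a minimal sketch with purely hypothetical per-stage success probabilities (none of these numbers are estimates from this comment): even when every individual stage looks likely, the product can fall well below one half.

```python
# Illustrative only: hypothetical per-stage success probabilities for a
# single land-based missile. These are placeholders, not real estimates.
stages = {
    "missile in good shape and ready to launch": 0.7,
    "crew carries out the launch order": 0.9,
    "launched before the site is destroyed": 0.8,
    "not shot down in flight": 0.9,
    "aimed correctly at the target": 0.95,
    "detonates at the right time": 0.85,
}

# The missile only succeeds if every stage succeeds, so the
# probabilities multiply.
p_success = 1.0
for stage, p in stages.items():
    p_success *= p

print(f"Per-missile success probability: {p_success:.2f}")  # ~0.37 with these placeholders
```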
This is decision-relevant in a couple of respects, the most important being that the fewer missiles hit and detonate, the less likely that a nuclear exchange results in a collapse of civilization/post-apocalyptic wasteland, though note that even if you assume all the purple triangles hit you don’t have to go very far to be safe, and if we evacuate we’ll evacuate to somewhere outside any of the purple triangles. People in major coastal cities should be more worried, as they’re likelier to be targeted by submarine-launched missiles, which I think almost definitely 1) work, 2) would be launched if ordered, 3) could not be prevented from launching, and 4) cannot be shot down; and people near US military bases should assume a lot of missiles would be launched at that target to make sure at least some get through. People elsewhere in purple triangles are, in my assessment, at 5x to 20x less risk, from a combination of more uncertainty about whether their city will be targeted and a much higher likelihood an attempt wouldn’t work.
Plausible cruxes:
I strongly do not expect full nuclear exchange in immediate response to Russian tac nuke use; the situation that seems plausible to me would involve conventional retaliation against Russian forces in Ukraine, Syria, etc., followed by Russia responding to that. So I think leaving at a further point still means leaving well ahead of a full exchange.
I think my work is much more valuable in worlds without a full nuclear exchange; iirc you are pretty doomy on current trajectories, so maybe you actually think your work is more valuable in worlds with a full nuclear exchange, or at least of comparable value?

I think I’m twice as productive at home, for reasons relating to childcare, disruption associated with fleeing, personal traits, my home being well set up to meet my needs, diet, etc.
There are a bunch of preparations the US military would want to take in the face of elevated odds of nuclear war (bombers in the air, ships looking for submarines, changes of force concentration) and I don’t believe they will sacrifice making those preparations for crowd management reasons. I agree it’s possible they’ll say something noncommittal or false while visibly changing force deployments to DEFCON 2 or whatever, though this is not what they did during the Cold War and it would be pretty obvious.
Yep, agree—I think it was warranted to be extremely cautious in February/March, and then the ideal behavior would have been to become much less cautious as more information came in. In practice, I think many people remained extremely cautious for a full year (including my family) out of some combination of inertia and exhaustion about renegotiating what had been strenuously negotiated in the first place.
Some people furthermore tried very aggressively to apply social pressure against fully vaccinated people holding events and returning to normalcy in spring of 2021, which I think was an even more clear-cut mistake given the incredibly high pre-omicron vaccine efficacy. I am not actually sure I know anyone who I believe missed in the incautious direction, and if we’d had equal misses in both directions I’d feel a lot better about our community decisionmaking.
I don’t know, but I think likely days not weeks. Tactical nuke use will be a good test ground for this—do we get advance warning from US officials about that? How much advance warning?
Thanks for this thoughtful reflection. I do want to register that I think I disagree that there wouldn’t be much EA to do after a nuclear exchange between the US and Russia—it would be a scary, hard world to live in, and one where many of our previous priorities are no longer relevant, but it’s work I think we could do, and we could improve the trajectory of civilization by doing it.
Though I should say that I think tac nuke use in Ukraine is also a reasonable trigger to leave, depending on your personal situation, productivity, ease of leaving, where you’re going, etc.—I really just want people to be sure they are doing the EV calculations and not treating risk-minimization as the sudden controlling priority.
My impression is that US intelligence has been very impressive with regard to Russia’s military plans to date. US officials confidently called the war in Ukraine by December and knew the details of the planned Russian offensive. They’re saying now that they think Putin is not imminently planning to use a tactical nuke. If they’re wrong and Putin uses a tactical nuke next week, that’d be a big update they also won’t predict further nuclear escalation correctly, but my model is that before the use of a tactical nuke, we’ll get US officials saying “we’re worried Russia plans to use a tactical nuke”. If I’m right about that, then I further predict they’ll be giving pretty accurate assessments of whether Russia is going to escalate from there.
That suggests a threshold to leave of [tactical nuke use in Ukraine, if it surprises US officials] or [tactical nuke use in Ukraine followed by a warning from US officials that Putin seems inclined to escalate further], which would be a 10x or more further update on risk in my view.
Hmm, what mechanism are you imagining for advantage from getting out of cities before other people? You could have already booked an airbnb/rented a house/etc before the rush, but that’s an argument for booking the airbnb/renting the house, not for living in it.
To be clear, I will also leave SF in the event of a strong signal that we’re on the brink of nuclear war—such as US officials saying they believe Russia is preparing for a first launch, or the US using a nuclear weapon ourselves in response to Russian use, or strategic rather than tactical Russian use (for example against Kyiv), or Russia declaring war on NATO or declaring intent to use nuclear weapons outside Russian territory.
I mostly expect overreaction in cases of a weaker signal such as a Russian “test” on territory Russia claims as Russian, or tactical use, or Russia inducing a meltdown at a nuclear power plant—all of which would be scary, destabilizing, precedent-setting events that dramatically raise the odds of a nuclear war, but which I wouldn’t call a “clear and unambiguous signal that a large amount of the world may be utterly destroyed in a matter of hours”.
In this framework, before the tac nuke use in Ukraine, your expected life-hours lost were (remaining life-hours) × P(nuke in your location | nuke in Ukraine) × P(nuke in Ukraine), so your subsequent expected life-hours lost should change by a factor of 1/P(nuke in Ukraine), or about six.

Though I think straightforwardly applying that framework is wrong, because it assumes that if you don’t flee as soon as there’s nuke use in Ukraine, you don’t flee at all even at subsequent stages of escalation; instead, you want P(nuke in your location | nuke in Ukraine and no later signs of danger which prompt you to flee). To figure out your actual expected costs from not fleeing as soon as there’s tactical nuke use in Ukraine, you need an estimate of how likely it is that there’d be some warning after the tactical nuke use before a nuclear war started.
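To make that arithmetic concrete, here’s a minimal sketch of the update. The only number taken from the text above is the prior P(nuke in Ukraine) of roughly 1/6 implied by the “about six” figure; the other inputs are hypothetical placeholders.

```python
# Sketch of the expected-value update, not a real risk estimate.
# Only p_nuke_ukraine is from the comment above ("about six" implies
# a prior of roughly 1/6); the other numbers are hypothetical.

remaining_life_hours = 50 * 365 * 24   # hypothetical: ~50 years left
p_nuke_ukraine = 1 / 6                 # prior P(nuke in Ukraine)
p_here_given_ukraine = 1e-4            # hypothetical P(nuke hits you | nuke in Ukraine)

# Before any nuke use in Ukraine:
ev_before = remaining_life_hours * p_here_given_ukraine * p_nuke_ukraine

# After tac nuke use, P(nuke in Ukraine) becomes 1:
ev_after = remaining_life_hours * p_here_given_ukraine

print(f"Expected life-hours lost rises by {ev_after / ev_before:.1f}x")  # ≈ 6x

# Per the refinement above, p_here_given_ukraine should really condition
# on "no later signs of danger that prompt you to flee", which lowers it.
```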
I felt a lot of this when I was first getting involved in effective altruism. Two of the things that I think are most important and valuable in the EA mindset—being aware of tradeoffs, and having an acute sense of how much needs to get done in the world and how much is being lost for a lack of resources to fix it—can also make for a pretty intense flavor of guilt and obligation. These days I think of these core elements of an EA mindset as being pieces of mental technology that really would ideally be installed gradually alongside other pieces of mental technology which can support them and mitigate their worst effects and make them part of a full and flourishing life.
Those other pieces of technology, at least for me, are something like:
a conviction that I should, in fact, be aspiring to a full and flourishing life; that any plan which doesn’t viscerally feel like it’ll be a good, satisfying, aspirational life to lead is not ultimately a viable plan; that I may find sources of strength and flourishing outside where I imagined, and that it’d be fine if I have to be creative or look harder to find them, but that I cannot and will not make life plans that don’t entail having a good life.
a deep comfort with my own values, some of which are altruistic and some of which are selfish, and with my own failings as a person; the ability to look at myself and see a lot of shortcomings and muddled thinking and mistakes and ways I’ve hurt people and to nonetheless feel love and pride for myself. For me, at least, the reason it hurt to notice I had selfish values was very close to the reason it hurt to notice I’d made a mistake or handled a situation poorly; I had a lot of my self-esteem and my conviction that I deserved to be happy and to be loved tied up in high expectations of myself. But of course it’s very damaging to your altruistic endeavors, and to your personal growth, to be unwilling to look at yourself the way you truly are, or to love yourself only for things you won’t always live up to, so I’m actually much stronger and better now that I’ve deeply internalized that I am flawed, and that I am selfish, and that I am incoherent and muddled in many ways, and that this is also true of all other humans and we all remain deserving of good lives all the same.
a sense that I am better and a better EA when I’m stronger and happier; that depression and burnout genuinely sap my productivity and my creativity and affect my epistemics; that miserably dragging myself across the finish line actually produces worse results than living a life I take pride in and enjoy deeply; an appreciation for just how much I’m capable of when I’m happy and love my life and love the people around me and love the work I do and don’t have to fight with myself to focus or prioritize.
a healthier relationship to my own motivational system: I used to do a lot of what I think of as ‘dragging my brain across sharp rocks’ to get stuff done. The stuff was aversive; I didn’t want to do it; I hated doing it; I forced myself to do it anyway. This changed how I related to all kinds of tasks, even ones that didn’t have to be aversive. I thought of ‘intrinsic motivation’ as basically willpower, the willingness to make myself hurt to get things done. It was hard to imagine doing things out of an uncomplicated, not-internally-coercive interest in making them happen. It took me a long time, and I had the luxury of a home environment and job that made it possible, but I flat-out don’t do that anymore. I do things when I want to do them; when it would take internal coercion and ‘dragging my brain over rocks’ to do things, I don’t do them. (I allow myself to make myself start a thing for a few seconds, to see if it just needed activation energy, but I don’t force myself through things that require ongoing internal making-myself). And it turns out that once I have some trust that doing things won’t be unpleasant and aversive, I do plenty of things, and it’s more achievable to add new things.
For me, this has taken a decade. I don’t think I was particularly good at it, I don’t know that I made all the right tradeoffs in doing it, and I hope it’s faster and better for other people. But I do want people to know that there’s a way of living your values that doesn’t feel fueled by guilt, that it’s possible to be an EA and have a life you just love, and that you should absolutely be aiming to be your strongest and best self rather than the version of yourself who sacrificed the most.