Clarifying the Petrov Day Exercise
Summary: The Petrov Day Red Button event is today. I think it would be more successful as a community ritual if it were presented less like a social experiment or game, and instead gave individuals an opportunity to reaffirm their values by choosing to opt in.
Today is Petrov Day and the EA Forum is marking the occasion by taking part in a joint exercise with the LessWrong community. 100 EA Forum users received codes that could shut down the LessWrong homepage for a day, and vice versa, with the aim that everyone would resist temptation and choose to do nothing rather than destroy something.
I was one of the 100 people from the EA Forum trusted with the codes and so I’ve been thinking a lot about the purpose of the exercise (and causing Aaron a lot of stress while doing so!).
This exercise has been described or treated in a few different ways today:
A game: The post describing the exercise says “LessWrong is teaming up with the Forum to play a game of mutual destruction” and that’s certainly what it looks like. When I opened the original email, it looked like I was being invited to a murder mystery! The button on the front page looks interesting and inviting and last year multiple people tried pressing the button and entering codes just out of curiosity.
A social experiment: The exercise is sometimes treated as a way to learn about game theory and nuclear warfare. For example, last year’s post-mortem talks about how “well-intentioned does not mean secure” and draws an analogy to larger security issues in the world, and the author says that the exercise changed how he thinks about the Cold War.
A community-building ritual: Ben Pace from LessWrong wrote last year, “We’ll return next Petrov Day, for another round of sitting around and not pressing destructive buttons, alongside our ceremonies and celebrations.” Aaron Gertler wrote on the EA Forum about how this exercise is a way of celebrating not destroying the world. Several users from both forums have told me that these exercises are meaningful to them, that it feels important to coordinate to preserve value in memory of Petrov and others like him.
I believe that because the purpose of the exercise is unclear (or because it’s trying to be all three things at once), the exercise is less successful than it could be.
If we want the Petrov Day Red Button to be a game, we could make it way more fun! First and foremost, we should make it clear that the stakes are low (a homepage would go down for a few hours, all your favourite links would still be up, no one would get hurt). If this is a game, we should disentangle real-life ethics about truthfulness and cooperation from gameplay and strategy, like most people would in a game of mafia or werewolf. We could also make it bigger and more inclusive.
This morning I made some light-hearted comments on Twitter about the Red Button exercise. A few good people who were genuinely concerned about me mentioned to me privately that if I don’t take the Red Button exercise seriously, there could be social consequences, or it might even lead to me missing out on job opportunities in the EA Community. I really appreciate those people for their compassionate warnings, but I definitely don’t see this exercise as fun anymore.
If we want the Petrov Day Red Button exercise to be a social experiment or learning experience, we need a better experimental design. There are experts on war-gaming scenarios who could definitely help. If it’s a social experiment, it should also be opt-in, so that people give their consent to be studied in this way. Honestly, though, I don’t think the type of people involved here or the scenario have enough similarities to a nuclear war that this would ever provide truly useful data as a social experiment.
If we want the Petrov Day Red Button exercise to be a community ritual, to help community members to build trust with each other and re-affirm their values, it again needs to be something people opt in to. I would love to see people choosing to take part as a way of reminding themselves that they are choosing a life of courage and wisdom and rationality. I think it could be really moving.
At present, however, participants aren’t given any way of opting out without deeply hurting their friends, regardless of whether this ritual reflects their values, which undermines the value of the ritual. I can imagine the Petrov Day Red Button exercise as being as meaningful for some longtermists as the Eucharist is for some Christians—but that level of meaning can only be achieved if everyone has a free choice to take part or not.
I believe that Aaron, Ben and Ruby want the Petrov Day Red Button exercise to be a way for people to show their commitment to certain principles. In order for that to happen, people have to be given the free choice to exercise those principles. My experience today has been of several people asking me to conform to their expectations because otherwise I’ll be socially punished—that has basically no application to a nuclear war and seems pretty toxic to a community building situation.
I haven’t decided yet if I’ll push the button, but if I do, it will be on behalf of anyone who would like to push it but can’t because they work (or want to work) for an EA organisation and can’t take the social risk. In my view, Petrov Day is about being willing to do what we think is most valuable instead of following orders, and I’d love to see a Petrov Day that makes more space for independent choice.
Hat tip to this comment for helping me understand the different ways this event can be understood.
I strongly agree with all this. Another downside I’ve felt from this exercise is that it feels like I’ve been dragged into a community ritual I’m not really a fan of, where my options are a) tacit support (even if it’s just deleting the email with the codes with a flicker of irritation) or b) an ostentatious and disproportionate show of disapproval.
I generally think EA- and longtermist-land could benefit from more ‘professional distance’: that folks can contribute to these things without having to adopt an identity or community that steadily metastasises over the rest of their life—with at-best-murky EV to both themselves and the ‘cause’. I also think particular attempts at ritual often feel kitsch and prone to bathos: I imagine my feelings towards the ‘big red button’ at the top of the site might be similar to how many Christians react to some of their brethren ‘reenacting’ the crucifixion themselves.
But hey, I’m (thankfully) not the one carrying down the stone tablets of community norms from the point of view of the universe here—to each their own. Alas this restraint is not universal, as this is becoming a (capital C) Community ritual, where ‘success’ or ‘failure’ is taken to be important (at least by some) not only for those who do or don’t hit the button, but corporate praxis generally.
As someone who is already ambivalent, it rankles that my inaction will be taken as tacit support for some after-action paean to some sticky-back-plastic icon of ‘who we are as a Community’. Yet although ‘protesting’ by “nuking” [sic] LW has some benefit of a) probably not getting opted in again and b) maybe making this less likely to be an ongoing ‘thing’, it has some downsides. I’m less worried about ‘losing rep’ (I have more than enough of both e-clout and ego to make counter-signalling an attractive proposition; “nuking” LW in a fit of ‘take this and shove it’ pique is pretty on-brand for me), but more that some people take this (very) seriously and would be sad if this self-imposed risk is realised. Even though I disagree (and think this is borderline infantile), protesting in this way feels a bit like trying to refute a child’s belief that their beloved toy is sapient by destroying it in front of them.
I guess we can all be thankful ‘writing asperous forum comments’ provides a means of de-escalation.
So to summarise, you seem not to be a fan of the ritual? :P
I’ve been accused of many things in my time, but inarticulate is a new one. ;)
You do use lots of big words
I am curious: if you believe professional distance is a good thing for EA, then what is your explanation for EA not being a solved problem already? The existing networks of professionals had all the analytical knowledge required; save for the people who joined up straight out of college, pretty much everyone currently working in EA was already in those networks.
There seems to be some premise missing in this argument.
To me, it seems that the question whether professional distance is good or not is mostly orthogonal to the question why EA isn’t a solved problem already.
I’m not on LW, so this was the first time I’d heard of this ritual. As designed, it appears to emphasize the precise opposite values that we honor in Petrov:
Petrov famously did not retaliate (according to the information he had). In this game, a co-lead of an EA org publicly pledges to retaliate.
Petrov chose to distrust the systems around him. This game emphasizes a forced choice of trust.
Petrov defied his in-group and suffered professional and social costs, which we honor. Here, defying the in-group would also mean professional and social costs, but those costs are wielded as a threat.
Petrov’s choices had enormous positive externalities. Here, the game emphasizes the insular desires of one community.
Additionally, I do believe that these two websites have value to a few thousand people. But it needs to be said that being offline for several hours is not nearly comparable to the lives of millions of people. Yet that comparison was made often. Stating that they are symbolically the same risks devaluing the reputation of the community here.
I’ll add this to Nathan’s other post, but other ways to celebrate Petrov might be:
write an opinion piece for a major paper about him (the Washington Post one is over 20 years old! could use an update);
organize a Giving Day;
create a coordinated social media campaign (there was one viral tweet yesterday about Petrov, which was cool);
research other people who’ve had similar impact but are still unknown to the world (h/t to Kirsten, who mentioned this on Twitter a while back).
I appreciate the work of those who organized the game yesterday, and the willingness to listen to this feedback.
There’s another important point: the structure of this game is just very different from what Petrov faced.
The game Petrov was in looked like a chain of decision points, where any individual actor or group of actors could have chosen to de-escalate:
US could have backed down earlier, causing background rates to be lower → mechanical detection of nukes could have not malfunctioned → Petrov could choose or not choose to forward this message to superiors → Soviet superior officers could choose to pass or not pass this message upwards → Soviet missile command could choose to fire or not fire their missiles at America → The Americans could choose to retaliate or not retaliate, thus sparing the Eastern hemisphere.
What we faced looked like:
Any 1 of 100 people could choose to nuke LW → Any 1 of 100 people could choose to nuke EAF.
Here, it’s clear that “nuking” is the unilateral action, whereas in Petrov’s case, not nuking is most simply read as the more unilateral action.
Small note:
This seems like a rather odd objection to me, like saying that abstract prisoner’s dilemmas in game theory classes are bad because millions of actual prisoners suffer greatly under the unfair criminal justice system. Metaphors and analogies are often useful for humans to think and reason about the world.
I agree with the sentiment here. I am confused about how some people are taking this super seriously and some people are not, and I feel distracted by being worried about offending people over this ritual. I’d love to play a game, play in a social experiment, or observe a ritual, but I agree it would be more fun to know which. Right now, this is not as fun as it could be.
Context: I have both LW and EA Forum launch codes. I never opted-in.
Hey, I got an email with a code and then I entered it.
What does it do? Is there a prize?
lol good fake
:O
It seems fairly likely (25%) to me that had Kirsten not started this discussion (on Twitter) I would have pushed the button because:
actually preventing the destruction of the world is important to me.
doing so, especially as a “trusted community member”, would hammer home the danger of well intentioned unilateralists in the way an essay can’t, and I think that idea is important.
despite being aware of lesswrong and having co-authored one post there, I didn’t really understand how seriously some people took the game previously.
worse, I was in the dangerous position of having heard enough about Petrov day to, when I read the email, think “oh yeah I basically know what this is about”, and therefore not read the announcement post.
I decided not to launch, but this was primarily because it became apparent through this discussion how socially costly it would be. I find people being angry with me on the internet unusually hard, and expect that pushing the button using the reasoning above could quite easily have cost me a significant amount of productive work (my median is ~ 1 week).
Regardless of what you think of the unilateralist’s curse, I think Petrov Day is a uniquely bad time to lambast the foibles of being a well-intentioned unilateralist.
This wasn’t intended as a “you should have felt sorry for me if I’d done a unilateralist thing without thinking”. It was intended as a way of giving more information about the probability of unilateralist action than people would otherwise have had, which seems well within the spirit of the day.
I also think it’s noteworthy that in the situation being celebrated the ability to resist social pressure was pointing in the opposite direction to the way it goes here, which seems like a problem with the current structure, but I didn’t end up finding a good way to articulate it, and someone else said something similar already.
I think either you misinterpreted my comment or I misinterpreted yours.
I’m genuinely confused how you could have gotten that interpretation from my comment.
So to be clearer, Petrov here should naively be read as a well-intentioned unilateralist. He happened to be right, and reasonable people can disagree about whether he was wrong but lucky or right all along. Regardless, I think it’s not very much in the spirit of the day to talk about or act out all the harms of being a well-intentioned unilateralist, though if you wish to do so, more power to you.
I agree, and have complained about this before. I’m also complaining about it now, in case that was not previously clear.
I interpreted your comment as saying that I was “lambasting the foibles of being a well intentioned unilateralist”, and that I should not be doing so. If that was not the intent I’m glad.
I interpreted you as proposing doing an unilateralist action to demonstrate to others the harm of unilateralist actions. Apologies if I misread.
It seems to me that either the decision to push the button is net negative, and you shouldn’t do it, or it isn’t, and if you do it people should learn the lesson “people in my community will do helpful net-positive things”. There’s something strange about the reasoning of “if I do X, people will realize that people do things like X for reasons like Y, even though I would not be doing it for reasons like Y” (compare e.g. “I will lie about Santa to my child because that will teach them that other people in the world aren’t careful about only communicating true things”, which I am similarly suspicious of).
Yeah I guess you could read what I’m saying as that I actually think I should have pressed it for these reasons, but my moral conviction is not strong enough to have borne the social cost of doing so.
One read of that is that the community is strong enough in its social pressure to quiet bad actors like me from doing stupid harmful stuff we think is right.
Another is that social pressure is often enough to stop people from doing the right thing, and that we should be extra grateful to Petrov, and others in similar situations, because of this.
Either reading seems reasonable to discuss today.
But if you actually should press the button, and do so because you correctly understand why you should, then people shouldn’t learn the lesson “people will do wild crazy stuff out of misunderstandings or malice”, because that won’t be what happened.
The lesson I would want people to learn is “I might not have considered all the reasons people might do stuff”. See comment below.
Perhaps the idea is that it should be a symbolic reminder that trusted community members could do bad things, rather than evidence for that proposition?
This is closer, I think the framing I might have had in mind is closer to:
people underestimate the probability of tail risks.
I think one of the reasons why is that they don’t appreciate the size of the space of unknown unknowns (which in this case includes people pushing the button for reasons like this).
causing them to see something from the unknown unknown space is therefore useful.
I think last year’s phishing incident was actually a reasonable example of this. I don’t think many people would have put sufficiently high probability on it happening, even given the button getting pressed.
As an update, I’ve decided not to push the button this year.
However I tentatively believe that this event in its current form is net negative for community health. I also see that a lot of the issues were predictable from last year’s comments but were not addressed.
For that reason, if next year’s event hasn’t been changed enough to be positive for community health (i.e. clear purpose, shared understanding around social pressure, opt-in if it could have any impact on people’s friendships), I am pre-committing to pressing the button. If I’m not invited I’ll encourage someone else to press it.
If you make it opt-in, please do feel free to invite me—really! There are a lot of versions of this that I think I’d enjoy, and I promise I’m not an entirely uncooperative person.
This is my fault. I was the lead organizer for Petrov Day this year though wasn’t an organizer in previous years. I recalled that there were issues with ambiguity last year, which I attempted to address (albeit quite unsuccessfully), however, I didn’t go through and read/re-read all of the comments from last year. If I had done so, I might have corrected more of the design.
I’m sorry for the negative experience you had due to the poor design. I do think it’s bad for people to find themselves threatened by social consequences over something they weren’t given proper context for.
If I’m involved in next year’s Petrov Day, I plan on there being consent mechanisms, as you suggest.
Thanks Ruby!
Thanks for writing this! I personally am a pretty big fan of the idea of this Petrov Day celebration, and have gotten joy from previous years, but mostly within the frame of the game or prompt for interesting discussions and explorations of norms. To me, this doesn’t feel like a community ritual, though it clearly does to some. Though I could imagine getting value and connection from an alternate version of this that did feel like a real ritual.
But I strongly agree that not asking people for consent/to opt-in is clearly bad here, and significantly under-cuts a lot of the possible value. In particular, I just do not think a community ritual works without asking people to buy in to it. And I feel pretty uncomfortable that people are entered into a game that may have real social consequences to them or give them opportunity to upset other people, without really having context on this—in particular, what happened at LessWrong last year seems terrible.
Yes, looking at the aftermath of last year is pretty grim. https://www.lesswrong.com/posts/K7jrkyKArvxJ224GD/on-destroying-the-world
Epistemic status: I’m still pretty confused about this whole thing, and wasn’t involved in ideation. (Though I helped when asked by choosing whom to email, editing the LW email, and making a post.)
I’ll second Ruby in taking some blame for the overly serious/sanctimonious tone of the email, which I made some edits to before both emails were sent. I didn’t make it more serious, but neither did I make it less serious, even though this whole practice feels more like a “fun social game” than “serious community ritual” to me.
To be clear, I don’t know whether the aforementioned “social consequences” would exist. I wouldn’t plan to think about the results of Petrov Day if I were considering someone for a job, or in any other context where I had any say in someone’s EA community experience. And maybe everyone else is on the same page with me — judging by the comments here, it feels like the Forum’s users generally don’t take the activity seriously in its current form.
But if someone out there does plan to “impose consequences” in some form, and states that desire, I think they risk damaging our ability to do fun community things that aren’t also tied to future professional success.*
In the future, I don’t want this event to be associated with social pressure or coercion, however incidentally. I’ve also never been enthusiastic about it in its current form, and if I’m still working on the Forum this time next year, I hope I’ll be helping to produce something different for the holiday.
*Though fun community things can still be meaningful! The best discussion I’ve seen on different ways to interpret the activity, and the balance between “serious and fun”, is here.
Habryka’s reply is a good representation of what seems valuable about the activity to me. But I wouldn’t want to follow him in “losing trust” in anyone who pressed the button. (As long as they had a good reason; if the reason is “I never liked the Forum much and I wanted to watch it burn”, that’s obviously a bit different.)
I agree with pretty much everything except:
It seems like anyone who received codes who didn’t want to participate could easily just ignore the email and not participate without any negative consequence. I don’t understand why you think opting out would lead to one’s friends being hurt.
Ah yes. I don’t consider ignoring the email to be opting out. As soon as I’ve read the email, inaction is one of my two options. If I delete the email, there will be a post that says something like, “We won! Everyone we emailed decided to cooperate!” even though I didn’t choose to cooperate; I wasn’t even sure if I wanted to be involved in the first place.
If you had to opt-in, not opting in would be one of your two options. If you don’t opt-in, there would be a post that says something like “We won! Everyone who could opt in decided to cooperate!”
I think there’s an important difference between ‘100 people opted in to our community ritual, and all successfully coordinated’ and ‘200 people coordinated’, where 100 paid attention and 100 totally ignored the email. I don’t feel any notion of trust or coordination from people ignoring an email, or just not being interested.
That’s genuinely fine with me, I hope all the people who decide to embrace this shared ritual find it meaningful and fulfilling :)
Edit: Also then I would have 3 options, opting out, opting in and cooperating or opting in and defecting, which was exactly my point? Here the only way to signal that I’m not interested is by blowing up LessWrong.
It seems like you are successfully signalling your lack of interest without blowing up LessWrong!
I think it would read something closer to “We won! Everyone who opted in decided to cooperate!”
I agree with you, although someone might still opt in treating it like a game and not initially taking it as seriously as others in the community are, and then take the site down. Last year, a user was manipulated into taking down LW by someone claiming the user had to enter their code to save LW.
I like the idea of having an obvious opt-in form into the game/ritual that makes it clearer that you understand how people might react to how you use the code. Maybe just something simple like having to click through two Google Docs in order to actually get your code, or a small Google Form that has the code on its confirmation screen. I really like the overall ritual and game, but also don’t want people to feel like they are tacitly signaling approval. The form could also have a simple checkbox that says “I don’t want to see my codes, and I don’t want to be sent codes in future years”, which we could take into account when deciding on who to send codes to in future years.
I think this would be most meaningful if opting out resulted in another person being sent codes instead. (Or a larger pool of people was invited to opt in and then a random subset of those who did were given codes.)
[I don’t really like the ritual at the moment but I like hill-climbing towards finding better versions of it]
I like that idea, and I think it would make me feel much more on board with the idea of the ritual!
Seems good to me too.
Thanks for writing this, and I agree with your take that it’s toxic when people find out after starting to engage that they may face serious consequences for not taking it seriously enough (and indeed whether or not they actually would, since it’s still unsettling to believe it). I’m sorry that this has been your experience.
If anyone wants to suggest what they’d like to see from Petrov Day, I’ve written a question here.
I thought it was explicit in the announcement post that we should take this seriously, but not in the e-mail I got:
FWIW, I made a joke comment on the EA Forum announcement post shortly after I got my code, and then deleted it (without a trace) a few minutes after, maybe after more carefully reading the announcement post. The joke was that I’d try to make the average predictions in this comment as wrong as possible (and I made it clear that I was only joking after an edit). I was indeed worried that it would hurt my reputation and the reputation of those in EA I associate with, and wanted to see what tone things would take first.
Thanks for writing this. I’ve been thinking about this a lot today.
Another purpose could be raising awareness of nuclear risk and unilateralism. And that could either be as a game or as a community building ritual. But I agree it should be clear which of those it is.
My guess is that it’s meant to be a community-building exercise but, to add some jeopardy, it’s been gamified. If it were opt-in, I reckon there would be little risk of the button being pressed, which would be less interesting. As Khorton says, this is pretty confusing and potentially harmful for those who misread the situation.
Based on how others have been warning you, it feels like the kind of psychological/social experiment you would need to have a psychological debriefing after to get ethics approval, and even then, still might not get approval.
(I downvoted this comment because I think the degree of ethics approvals needed for certain classes of science experiments is immorally high under some reasonable assumptions, and the EAF should not endorse arguments coming out of status quo bias. It’s also reasonably possible that I would not have downvoted if Michael wasn’t a coworker)
Thanks for explaining.
I agree that the standards can be too high, especially when participants are both fully informed and give consent (e.g. COVID vaccine trials). I think in this case, participants were not properly informed of the potential (community, social and career) risks ahead of time, and no one made sure they understood before participating.
When I wrote my comment, I actually had in mind the Stanford prison experiment, the Milgram experiment and Vsauce convincing people they were in a real trolley problem (this one had debriefing but not informed consent, and honestly, I feel pretty bad for some of the participants), although I’d guess these were much more likely to cause psychological trauma. I actually don’t really know what the current standards are like.
I agree that the Stanford Prison experiment and actually convincing people they are in a trolley problem is not reasonable scientific ethics and can reasonably be expected to traumatize people.
I think both Milgram and the SPE additionally have significant validity issues, which call into question their scientific usefulness.
I think the Stanford prison experiment and the Milgram experiment should not be lumped together, as if their validity issues or scientific usefulness was comparable. The former has never been successfully replicated and there is essentially nothing to be learned from it (other than sociological lessons related to how a seriously flawed study can be taken seriously, and be disseminated widely, for decades). By contrast, the Milgram experiment has been replicated multiple times in a wide range of settings, and when the totality of this evidence is considered it seems hard to deny that it uncovered something true, interesting and important about human psychology, even if one can point to some methodological problems in some of the studies.
At least this is my understanding as a complete amateur. Feel free to correct me if you think this assessment is inaccurate.
Thanks, I stand corrected pending further review
Yes, I agree