LessWrong/Lightcone Infrastructure
Ruby
Those accusations seem dramatically more minor and unrelated, and don’t update me much at all toward thinking the allegations of mistreatment of employees are more likely to be true.
I also see that a lot of the issues were predictable from last year’s comments but were not addressed.
This is my fault. I was the lead organizer for Petrov Day this year, though I wasn’t an organizer in previous years. I recalled that there were issues with ambiguity last year, which I attempted to address (albeit quite unsuccessfully); however, I didn’t go through and read/re-read all of the comments from last year. If I had done so, I might have corrected more of the design.
I’m sorry for the negative experience you had due to the poor design. I do think it’s bad for people to find themselves threatened by social consequences over something they weren’t given proper context for.
If I’m involved in next year’s Petrov Day, I plan on there being consent mechanisms, as you suggest.
Hey Shakeel,
Thank you for making the apology; you have my approval for that! I also like your apology on the other thread – your words give me hope that CEA is going in a good direction.
Some feedback/reaction from me that I hope is helpful. In describing your motivation for the FLI comment, you say that it was not to throw FLI under the bus, but because of your fear that some people would think EA is racist, and you wanted to correct that. To me, that is a political motivation, not much different from a PR motivation.
To gesture at the difference (in my ontology) between PR/political motivations and truth-seeking motivations:
PR/political
you want people to believe a certain thing (even if it’s something you yourself sincerely believe), in this case, that EA is not racist
it’s about managing impressions and reputations (e.g. EA’s reputation as not racist)
Your initial comment (and also the Bostrom email statement) both struck me as “performative” in how they demonstrated really harsh and absolute condemnation (“absolutely horrifying”, “[no] place in this community”, “recklessly flawed and reprehensible” – granted that you said “if true”, but the tone and other comments seemed to suggest you did think it was true). That tone and manner of speaking as the first thing you say on a topic[1] feels pretty out of place to me within EA, and certainly isn’t what I want in the EA I would design.
Extreme condemnation pattern matches to someone signaling that they too punish the taboo thing (to be clear, I agree that racism should not be tolerated at all), as is seen across a lot of the Internet, and it feels pretty toxic. It feels like it’s coming from a place of needing to demonstrate “I/we are not the bad thing”.
So even if your motivation was “do your bit to make it clear that EA isn’t racist”, that does strike me as still political/PR (even if you sincerely believe it).
(And I don’t mean to doubt your upsetness! It is very reasonable to be upset if you think something will cause harm to others, and harm to the cause you are dedicating yourself to, and harm to your own reputation through association. Upsetness is real and caring about reputation can come from a really good place.)
I could write more on my feelings about PR/political stuff, because my view is not that it’s outright “bad/evil” or anything, more that caution is required.
Truth-seeking / info-propagation
Such comments focus more on sharing the author’s beliefs (not performing them)[2] and explaining how they reached them, e.g. “this is what I think happened, and this is why I think that”, the inferences they’re making, and what makes sense. They acknowledge uncertainty and leave room for the chance that they’re mistaken.
To me, the ideal spirit is “let me add my cognition to the collective so we all arrive at true beliefs” rather than “let me tug the collective beliefs in the direction I believe is correct” or “I need to ensure people believe the correct thing” (and especially not “I need people to believe the correct thing about me”).
My ideal CEA comms strategy would conceive of itself as having the goal of causing people to have accurate beliefs foremost, even when that makes EA look bad. That is the job – not to ensure EA looks good, but to ensure EA is perceived accurately, warts and all.
(And I’m interested in attracting to EA people who can appreciate that large movements have warts, who can tolerate weirdness in beliefs, and who get that movement leaders make mistakes. I want the people who see past that to the ideas and principles that make sense, and to the many people (including you, I’d wager) who are working very hard to make the world better.)
Encouragement
I don’t want to respond to a step in the right direction (a good apology) with something that feels negative, but it feels important to me that this distinction is deeply understood by CEA and EA in general, hence me writing it up for good measure. I hope this is helpful.
ETA: Happy to clarify more here or chat sometime.
I came to the comments here to also comment quickly on Kathy Forth’s unfortunate death and her allegations. I knew her personally (she sublet in my apartment in Australia for 7 months in 2014, but more meaningfully in terms of knowing her, we also overlapped at Melbourne meetups many times and knew many mutual people). Like Scott, I believe she was not making true accusations (though I think she genuinely thought they were true).
I would have said more, but will follow Scott’s lead in not sharing more details. Feel free to DM me.
There are a lot of dumb laws. Without saying it was right in this case, I don’t think that’s categorically a big red line.
When I think about being part of the movement or not, I’m not asking whether I feel welcomed, valued, or respected. I want to feel confident that it’s a group of people whose values, culture, models, beliefs, epistemics, etc. mean that being part of the group will help me accomplish more of my values than if I didn’t join.
Or in other words, I’d rather push uphill to join an unwelcoming (perhaps very insular) group whose ability to do good I have confidence in, than join a group that is all open arms and validation but that I don’t think will get anything done (or will get negative things done).
And to be more bold, I think if a group is trying to be very welcoming, they will end up with a lot of members that I am doubtful share my particular nuanced approach to doing good, and with whom I’m skeptical I can build trust and collaborate because our worldviews and assumptions are just too different.
My guess is it was enough time to say which claims you objected to and sketch out the kind of evidence you planned to bring. And Ben judged that your response didn’t indicate you were going to bring anything that would change his mind about whether the info he had was worth sharing. E.g. you seemed to focus on showing that Alice couldn’t be trusted, but Ben felt that this would not refute enough of the other info he had collected, and that the kinds of refutation offered (e.g. it was only a $50 fine for driving without a license, she brought back illegal substances anyway) were not compelling enough to change the judgment that the info was worth sharing.
I do think one can make judgments from the meta info, and 3 hours is enough to get a lot of that.
I consider something of a missing mood on your part to be quite damning. From what I hear and see (Ben’s report of your call with him, how you’re responding publicly, the threat to Lightcone/Ben), you are overwhelmingly concerned with defending yourself and don’t seem contrite at all that people you employed feel so extremely hurt by their time with you. I haven’t heard you dispute their claims of hurt (do you think those are lies for some reason?); instead you focus on the veracity of the reasons for being hurt. But do you think you’re causally entangled with their feeling hurt? If so, where is the apology, or the contrition and horror at yourself, that they think being with you resulted in the worst months of their lives?
I’d understand a lack of that if your position was “they’re definitely lying about how they felt, probably for motivation X; give us time and we can prove that”, but this hasn’t been the nature of your response.
I actually would expect more “competent” uncompassionate people concerned only with their own reputation to have acted contrite, because it’d make the audience more sympathetic. That suggests you all aren’t very good at modeling people, which makes it more likely you weren’t modeling your employees’ experience very well either, perhaps resulting in a lot of harm from negligence more than malice (which still warrants sharing this info about you).
Congratulations!! Marriage between the right people is wonderful.
Miranda Dixon-Luinenburg and I had EA themes throughout our wedding ceremony. You’re welcome to read and borrow from our ceremony text. (Eventually I’ll post the audio recordings too, but they need some significant audio clean up.)
Context: we had our wedding in a planetarium and had our friends write speeches, each according to a particular theme, combining into an overall arc. Each speech was read while a matching starscape was projected on the dome.
RobertM and I are having a “dialogue”[1] on LessWrong with a lot of focus on whether it was appropriate for this to be posted when it was, with the info collected so far (e.g. not waiting for Nonlinear’s response).
What is the optimal frontier for due diligence?
[Speaking from LessWrong here:] based on our experiments so far, I think there’s a fair amount more work to be done before we’d want to widely roll out a new voting system. Unfortunately for this feature, development is paused while we work on some other stuff.
I think it matters a lot to be precise with claims here. If someone believes that any case of people with power over others asking them to commit crimes is damning, then all we need to establish is that this happened. If it’s understood that whether this was bad depends on the details, then we need to get into the details. Jack’s comment was not precise so it felt important to disambiguate (and make the claim I think is correct).
I think I agree with your clarification and was in fact conflating the mere act of speaking with strong emotion with speaking in a way that felt more like a display. Yeah, I do think it’s a departure from naive truth-seeking.
In practice, I think it is hard, including for the second-order reasons you give and others. Perhaps the ideal is that people share strong emotion when they feel it, but in some kind of format/container/manner that doesn’t shut down discussion or get things heated. “NVC” style, perhaps, as you suggest.
I would think you could go through the post and list out 50 bullet points of what you plan to contest in a couple of hours.
Good comment!!
Most ideas for solving problems are bad, so your prior should be that if you have an idea, and it’s not being tried, probably the idea is bad;
A key thing here is to be able to accurately judge whether the idea would be harmful if tried or not. “Prior is bad idea != EV is negative”. If the idea is a random research direction, it probably won’t hurt anyone if you try it. On the other hand, for example, certain kinds of community coordination attempts deplete a common resource and interfere with other attempts, so the fact that no one else is acting is a reason to hesitate.
Going to people who you think maybe ought to be acting and asking them why they’re not doing a thing is probably a thing that should be encouraged and welcomed? I expect in most cases the answer will be “lack of time” rather than anything more substantial.
Collaborative calendar/schedule for the event is now live! https://docs.google.com/spreadsheets/d/1xUToQ-Wu6w-Uaow7q8Bo5s61beWWRJhIh9P-DNAvx4Q/edit?usp=sharing
Please add any events or activities you’d like to run. Comment here or in the doc if you have questions, e.g. about good places to host your session.
I think this post falls short of arguing compellingly for the conclusion.
It brings 1 positive example of a successful movement that didn’t schism early on, and 2 examples of large movements that did schism and then had trouble.
I don’t think it’s illegitimate to bring suggestive examples rather than a systematic review of movement trajectories, but I think it should be admitted that it isn’t hard to cherry-pick three examples.
There’s no effort expended to establish equivalence between EA and its goals and Christianity, Islam, or Atheism at the gears level of what they’re trying to do. I could argue that they’re pretty different.
I seriously do not expect that an EA schism would result in bloodshed for centuries. Instead, it might save thousands of hours spent debating online.
The argument that “EA is too important” proves too much. I could just as easily say that because the stakes are so high, we can’t afford to have a movement containing people with harmful beliefs, and therefore it’s crucial that we schism and focus fresh with people who have True Spirit of EA or whatever.
This is not something I fault this post for not arguing about, but I’m personally inclined to think that “longtermist” EA should not have tried to become a mass movement (which is what the examples described are), and instead should have stayed relatively small and grown extremely slowly. I suspect many people are starting to wonder whether that’s true, and if so, people who want a smaller, more focused, weirder, “extreme” group of people collaborating should withdraw from the people who aspire to a welcoming, broadly palatable mass movement, and each group will get out of the other’s way.
There are historical reasons why things developed the way they did, but I think it is clear there are some distinct cultural/worldview clusters in EA that have different models and values and aren’t united by enough to overcome that. I think that splitting might allow both groups to continue, rather than what would likely happen otherwise: one group just dissolving, or both groups dissolving except for a core of people who want to argue indefinitely.
What would convince me against splitting is if, no really, everyone here is united very strongly by some underlying core values and world beliefs, and we can make enough progress on the differences en masse. I’m skeptical, but it’s good to say what might convince you.
Hi Larks, thanks for taking the time to engage.
I’m not sure how relevant this is to the EA forum?
I personally think that for Effective Altruists to be effective, they need to be healthy/well-adjusted/flourishing humans, and therefore something as crucial as good relationship advice ought to be shared on the EA Forum (much the same as productivity, agency, or motivation advice).
I didn’t mention it in the post, but part of the impetus for this post came from Julia’s recent Power Dynamics between people in EA post that discusses relationships, and it seemed like collecting broader advice on that would make for a healthier community overall. Mm, that’s a point I’d emphasize – healthy relationships between individuals make for a healthy community, especially when the individuals are working within and across EA orgs.
My understanding (which could be wrong, and I hope they don’t mind me mentioning it on their behalf) is that the EA Forum dev team is working to build Swapcard functionality into the forum, including the ability to import your Swapcard data.
In the meantime, I agree with the OP.
There’s a user setting that lets you do this.
I think asking your friends to vouch for you is quite possibly okay, but that people should disclose there was a request.
It’s different evidence: “people who know you saw this and felt motivated to share their perspective” vs “people showed up because it was requested”.