I wonder if it would be worth spinning off Community Health into its own org, to decouple it from those assets and put it in a more favorable legal jurisdiction?
Could also help it be more of a trusted neutral party.
Be more willing to call out inappropriate, weird, and/or off-putting behavior, and more willing to simply shut down certain types of people without needing to endlessly discuss or justify it. Be more willing to call obvious red flags what they are.
I think the right approach is something like a legal trial: devote a discussion to figuring out whether there was wrongdoing, then determine any punishment, then mete it out.
Concretely, this could look something like: “I propose we set a timer and discuss for X minutes. When it rings, we make a list of possible punishments, with ‘no punishment’ one item on the list, and use anonymized approval voting to determine which punishment is best.” (Note that in our legal system, the severity of punishment is often adjusted based on the amount of remorse displayed by the offender, and that seems good here too.)
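For illustration, here's a minimal sketch of how the anonymized approval-voting step could be tallied. The options and ballots are made up; the only point is that each voter approves any number of options and the most-approved option wins:

```python
# Tallying anonymized approval ballots, as proposed above.
# Each ballot is the set of punishments that voter finds acceptable.
from collections import Counter

options = ["no punishment", "warning", "temporary ban"]  # hypothetical options
ballots = [
    {"no punishment"},
    {"warning", "temporary ban"},
    {"warning"},
]

# Count how many voters approved each option.
tally = Counter(choice for ballot in ballots for choice in ballot)
winner = max(options, key=lambda option: tally[option])
print(tally, winner)  # the option approved by the most voters wins
```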
Endless discussion seems bad, but assuming guilt or copy/pasting prevailing mainstream social norms also seems bad—prevailing mainstream social norms have often been wrong throughout history, and I doubt the present is an exception. (But Chesterton’s Fence applies—it’s good to try and understand the best arguments available for a mainstream social norm before disregarding it.)
That’s horrifying, sorry to hear about it
Didn’t EA originate as a professional community, though, specifically in the context of finding effective charities and 80k?
Not in the Bay Area. Polyamory was a big discussion topic on LessWrong as far back as 2011: https://www.lesswrong.com/posts/kLR5H4pbaBjzZxLv6/polyhacking
BTW, I’m wondering if a good heuristic is: If someone makes a lot of accusations, they’re likely to be a liar. If someone receives a lot of accusations, they’re likely to be guilty. The idea being that genuine victimhood (from a crime or a false accusation) happens to people at a fairly even background rate, but bad actors tend to misbehave more than once.
I’ll by default repost the links and my guess at the identity of the person in question in 24 hours, unless a forum admin objects or someone makes a decent counterargument.
I think the best counterargument would probably be something like: posting links and guessing the identity would deter other survivors from coming forward. I feel like my model of what deters survivors from coming forward is pretty bad, and I would want to read the literature on this (hopefully there is a high-quality literature?)
Agreed.
And I hope that Aurora Quinn-Elmore, if this depiction of her is accurate, sees her mediation work dry up.
For what it’s worth, prior to reading this article, I knew Aurora by reputation as someone who was aggressively feminist. I remember having a conversation with a [edit: conservative-leaning] woman at a party who told me something like: “I tried to have a discussion with Aurora about consent, and I wasn’t able to get through to her. You might want to avoid kissing her or anything like that, to stay on the safe side.”
Needless to say, this leaves me feeling fairly confused about what’s actually going on.
I have found that the people I have met in EA are much more open to talking about sex and sexual experiences than I am comfortable with in a professional environment. I have personally had a colleague in EA ask me to go to a sex party to try BDSM sex toys.
I would guess this is a mixture of
Founder effects: Sexuality being a topic of discussion in communities which were precursors to EA. EA didn’t originate as a professional community.
Openness to weird ideas: The idea that buying a $40K car instead of a $30K car means you gave up an opportunity to save a life is pretty weird. The idea that vast numbers of people could exist in the future and our overwhelming moral priority should be to ensure that they’re living happy lives is pretty weird. The idea that shrimp welfare is super important is pretty weird. These are all intense, extraordinary conversation topics. Polls show most people masturbate. Most of us don’t talk about it. But if anyone talks about it, I imagine it’s a person who is comfortable with (or even delights in) intense, extraordinary conversations more generally.
Or just look at the ratio of karma to views/reads. A high karma-to-view ratio suggests a good post with a boring title, one that deserves more visibility.
It looks like Hacker News uses the comment-to-score ratio for flame-war detection.
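As a sketch, a heuristic in that spirit could be as simple as the function below. The threshold and the +1 smoothing are my guesses, not Hacker News's actual values:

```python
# Flag a thread as a possible flame war when comments outpace karma.
def looks_like_flamewar(num_comments: int, karma: int, threshold: float = 1.0) -> bool:
    # +1 smoothing avoids division by zero on brand-new posts
    return num_comments / (karma + 1) > threshold
```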
We were recently asked about the posts we found most valuable over the course of 2022. I wonder what a machine learning algorithm tasked with predicting “most valuable” status from a few simple features like karma-to-view ratio or upvote/downvote ratio would find. (Presumably, the majority of posts were not marked as “most valuable”, so you’d need a solution to the class imbalance problem—I suggest increasing the weight of posts marked as “most valuable” in the loss function, to reflect the fact that false negatives are costly. Also, you might want to Bayes-adjust your features / have a prior that needs to be overcome, to avoid over-updating on the first few data points which come in regarding a new post.)
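For concreteness, here's a minimal Python sketch of that pipeline. The feature values, the prior parameters, and the use of scikit-learn's `class_weight="balanced"` are all my own illustrative choices, not anything the Forum actually does:

```python
# Sketch: predicting "most valuable" status from simple post features.
# All numbers below are hypothetical illustrations.
import numpy as np
from sklearn.linear_model import LogisticRegression

def bayes_adjusted_ratio(successes, trials, prior_mean=0.05, prior_strength=20):
    """Shrink a raw ratio toward a prior, so a post with 2 karma from 10 views
    isn't scored the same as one with 200 karma from 1000 views."""
    return (successes + prior_mean * prior_strength) / (trials + prior_strength)

# Hypothetical per-post data: karma, views, upvotes, total votes, "most valuable" label.
posts = np.array([
    [150, 3000, 160, 180, 1],
    [10,  2500, 15,  40,  0],
    [80,  900,  85,  95,  1],
    [5,   400,  8,   20,  0],
])
karma, views, ups, votes, y = posts.T

X = np.column_stack([
    bayes_adjusted_ratio(karma, views),  # karma-to-view ratio, shrunk toward the prior
    bayes_adjusted_ratio(ups, votes),    # upvote ratio, shrunk toward the prior
])

# class_weight="balanced" upweights the rare "most valuable" class in the loss,
# one standard answer to the class-imbalance problem mentioned above.
model = LogisticRegression(class_weight="balanced").fit(X, y)
print(model.predict_proba(X)[:, 1])
```

The prior shrinkage does double duty here: it handles the "first few data points on a new post" worry directly, since a new post's ratios start near the prior mean and only move as evidence accumulates.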
The geographic strategy might work for economic development in poverty-stricken geographic regions. It seems plausible to me that this would e.g. help pay for public goods in Kenya that the GiveDirectly approach doesn’t currently do a good job of funding. I wonder if Justin Rosenstein would be interested in running a pilot?
If they moved from the UK to the US, would that help defend against libel/slander lawsuits from Americans?