How do you feel retrospectively about the change to lower community engagement on the EA forum? Do you have any numbers about activity/engagement since then?
Thanks! It’s okay. This is a very touchy subject and I wrote a strongly opinionated piece so I’m not surprised. I appreciate it.
I’m going to go against the grain here, and explain how I truly feel about this sort of AI safety messaging.
As others have pointed out, fearmongering on this scale is absolutely insane to those who don’t have a high probability of doom. Worse, Eliezer is calling for literal nuclear strikes and great power war to stop a threat that isn’t even provably real! Most AI researchers do not share his views, and neither do I.
I want to publicly state that pushing this maximized narrative about AI x-risk will lead to terrorist actions against GPU clusters or individuals involved in AI. These types of acts follow naturally from the intense beliefs of those who agree with Eliezer and share his doomsday-cult style of thought.
Not only will that sort of behavior discredit AI safety and potentially EA entirely, it could hand the future to other actors or cause governments to lock down AI for themselves, making outcomes far worse.
I strongly disagree with sharing this outside rationalist/EA circles, especially with people who don’t know much about AI safety or x-risk. I think this could drastically shift someone’s opinion on Effective Altruism if they’re new to the idea.
Hey man, I respect that. Clearly people like your post so keep it up, just my personal preference.
Like I said I absolutely agree with your points here.
Strongly agree with the premise but not a fan of your writing style here. If you could define “smart” and “wise” better, and maybe rely less on personal anecdotes, I think this post might be more persuasive overall.
Thanks for the thought-out response! I suppose the main difference is that we have very diverging ideas of what the EA community is and what it will/should become.
I’ve been on the fringe of EA for years, just learning about concepts and donating but never been part of the tighter group so to speak. I see EA as a question—how do we do the most good with the resources available?
Poly is definitely something historically tied to the early movement, but I guess I just disagree that the trade-off (reputational damage and attacks over sexual harassment issues, etc.) is worth vague notions of fun.
Also, if the EA community creates massive burnout, maybe we should change the way we approach our communications and epistemics instead of accepting it and saying we’ll cope by having casual sex. That doesn’t seem like a good road to go down, especially long term.
Then again I don’t have short AI timelines.
I’m concerned that less than 90% of the AI safety community would agree. I have heard some disturbing anecdotes.
Thanks for posting this retrospective! I’m curious about the quiz after fellowships—is that available anywhere?
True…. But as soon as the wrong group catches wind of this, it could turn into a powerful meme to demonize this sort of thinking.
In the spirit of weirdness points, it may be better not to be too blatant about the fringe of animal welfare arguments until public consensus has shifted farther. Perhaps I’m being too pessimistic, full disclosure I do not find insect welfare a compelling line of reasoning.
I hate to be a pessimist, but have you thought about branding this differently? I’d argue a less open approach, with some sort of acronym that adds a little obfuscation, would be better.
It’s literally a meme among many anti-intellectual circles to say “I will not eat the bugs”—the idea being that technical/liberal/Silicon Valley folks want to force normal people to eat bugs. Obviously this type of thinking is hyperbolic and ridiculous, but I worry that naming this org the Insect Institute creates a needlessly large attack surface for animal welfare, and by extension EA.
Thanks for clarifying. :) Sorry to derail.
Hah, fair point. I guess I just am hoping to drive more discussion about this type of thing on the forum and it is definitely frustrating to see how broken up conversations are.
I am also pushing to promote things quicker and not get delayed in drafting forever. I did that for a while and basically never posted anything—I wish more people would be willing to post things on the community side that aren’t extremely high quality and polished.
Thanks for your input! From my perspective as one of the people talking about “establishing norms,” I can definitely do better to add nuance here.
As others have mentioned the issue is the blended lines between the professional roles and personal community is tough to navigate. I generally am in favor of classical liberal approaches to personal morality, but I do think that, as Jeff points out, these sorts of things can be other people’s business in a professional setting.
Imagine you rely on an EA grant for your job/livelihood, or work at an EA organization. You work extremely hard but don’t manage to get your grant renewed, or you get fired. Then you find out that someone else in a similar role slept with a grantmaker or a grantmaker’s friend, and won the grant over you.
How would that make you feel? Would you still feel it isn’t your business?
Just a note, but if you’re trying to facilitate object-level discussion it might be better not to drop this right as Ozzie dropped a very similar post? https://forum.effectivealtruism.org/posts/hAHNtAYLidmSJK7bs/who-is-uncomfortable-critiquing-who-around-ea
Thanks for pointing out the nuance here, and I strongly agree with your points. These sorts of issues are as you said extremely corrosive, and make it difficult for outsiders not to have a frustrated and bitter view of the elite in EA.
Also, as has been pointed out, this type of casual disregard for any sort of restriction on relationships disproportionately drives away women, since so many people in powerful positions in EA are men.
Finally, the optics are awful. I don’t have a strong stance on wokeism per se, but if we get more and more scandals like the TIME article, I worry about EA’s ability to grow and convince large decision makers. Reputation matters.
Is there a collected list of classic forum posts anywhere? I’d like to check it out.
Thanks for all y’all do—sharing stats publicly like this is really helpful.
Any plans for more legible / objective criteria on who is accepted versus rejected?
Also, has your team done any reflection on calls to open up EA Global or create a more community-focused, less gated conference?
On another note, this is an extremely good point I haven’t seen brought up much:
“Of course, those with power can still take action behind the scenes. This combination (has trouble publicly responding, but can secretly respond in powerful ways) is catastrophic for trust building.”
The asymmetrical relationship here is a huge piece of the puzzle I was missing. Thanks for helping me update on this particular issue.