I like the community section change more than I expected to. I especially like that it only displays 3 posts. It feels like my attention is more where I want it and that feels good.
Quadratic Reciprocity
I don’t have thoughts on that, just being nitpicky since the original framing was “EA-branded event” :)
Re 1: it wasn’t really an EA-branded event though, I think.
While I don’t like this post, I think someone should be writing a more detailed post along these lines to provide more context for people outside of Anthropic. It feels like many newer people in AI safety have positive feelings about Anthropic by default because of its association with EA and a post that causes people to think some more about it could be good.
My defense of posting pseudonymously:
It does feel like I’m defecting a little bit by using a pseudonymous account. I do feel like I’m somewhat intentionally trying to inject my views while getting away with not paying the reputational cost of having them. My comments use fewer caveats than they would if I were posting under my real name, and I’m more likely to blurt things I currently think without spending lots of time thinking about how correct I am. I also feel under little obligation to signal thoughtfulness and niceness when not writing under my name. Plausibly this contributes to lowering the quality of the EA Forum, but I have found it helpful as a (possibly temporary?) measure to practise posting anything at all. I think that I have experiences/opinions that I want others on the EA Forum to know about but don’t want to do the complicated calculation to figure out whether it is worth posting them under my real name (where a significant part of the complicated calculation is non-EA people coming across them while searching for me on the internet).
I also prefer the situation where people in the EA community can anonymously share their controversial views over the situation where they don’t say anything at all, because it makes it easier to get an accurate pulse of the movement. I have mostly spent lots of time in social groups where saying things I think are true would have been bad for me, and I do notice that this stunted my thinking a bit, as I avoided thinking thoughts that would be bad to share. Writing pseudonymously feels helpful for noticing that problem and practising thinking more freely.
Also idk pseudonyms are really fun to use, I like the semi-secret identity aspect of using them.
Adding to the appreciation subthread:
Oliver Habryka’s comments here and on Twitter, the ones honestly just defending his beliefs even when controversial, have made me feel safer and more comfortable in EA. It’s wholesome.
Also I think I probably use the word “integrity” a lot more now because of him lol
Like I think he has made an actual difference in me being more honest and more willing to share my actual beliefs with others.
I appreciate the amount of detail you go into in your comments.
As a woman / “junior EA” / recent “EA student”, I do feel some amount of wariness around my dating choices being policed/restricted out of a desire to protect me, and I think there has to be a bar for when that seems appropriate. Having rules against bosses/professors starting romantic relationships with current employees/students is above that bar, but I currently think many potential situations of senior EAs dating more junior people in their field whom they interact with in social contexts (or students who are not much younger) would not be above that bar.
It feels like there are two problems here (which overlap):
EAs who have some type of influence using it in bad ways to harm the careers / social reputation of people who have rejected them. To the extent that this is a problem, I think more explicitness helps, both in stating conflicts of interest and in propositioning people.
More junior EAs feeling pressured into saying yes, or not calling out bad behaviour, because they think the above could happen, regardless of whether it actually could. This is affected by junior EAs feeling uncertain about what kinds of influence people have within EA and what conflict of interest policies are like.
I have encountered abuses of power in romantic relationships outside of EA settings that seemed to me to be exacerbated by things feeling shady, such that people felt like they couldn’t be explicit enough and had to navigate plausible deniability. I am much more comfortable in situations where people can express their interest in someone openly if they do intend to start a romantic/sexual relationship, and can be clear about their past relationships, as this makes it easier to detect potential abuses of power. I think power abuses happen more in situations where there’s lots of fuzziness.
I appreciate the shortform feature on the EA forum and LessWrong. I feel a bit insecure about my writing style and it feels a bit intimidating to post on the internet. Writing shortform posts has been helpful for getting more comfortable sharing my thoughts on the forum.
I also feel very appreciative towards various parts of the EA/EA-adjacent community I have interacted a lot with socially. They have been instrumental in helping me form close friendships with people I vibe a lot with and my social life would feel less complete if I hadn’t encountered EA. Some of the people I’ve met have been very inspirational and have motivated me to be more principled, thoughtful, and ambitious.
I like EA ideas; I think sanely trying to solve the biggest problems is a good thing. I am less sure about the current EA movement, partly because of the track record of the movement so far and partly because of intuitions that movements this focused on gaining influence and recruiting more people will go off track, and it doesn’t look to me like enough is being done to preserve people’s sanity and get them to think clearly in the face of the mind-warping effects of the movement.
I think it could both be true that we need a healthy EA (or longtermist) movement to make it through this century and that the current EA movement ends up causing more harm than good. Just to be clear, I currently think that on the current trajectory, the EA movement will end up being net good, but I am not super confident in this.
Also, sorry my answer is mostly just coming from thinking about AI x-risk stuff rather than EA as a whole.
Fair. I think in FTX worlds, it should in fact be harder to get people who strongly dislike fraud on board with EA, and in Bostrom email worlds, it should in fact be harder to get people who strongly dislike the apology on board with EA. And this difficulty, to the extent we care about people turned off by either event having a favourable opinion of EA, is actually right and just.
I guess I make comments like the one I made above because I think too few people doing EA community building are seriously considering that the actual impact (and expected impact) of the EA movement could be net negative. It might not be, and I’m leaning towards it being positive, but I think it is a serious possibility that the EA movement causes more harm than good overall, for example via having sped up AI timelines due to DeepMind/OpenAI/Anthropic, and via a few EA community members committing one of the biggest frauds ever. Or more vague things like EAs fucking up cause prioritisation, maximising really hard, and being unable to course correct later.
The way the EA movement ends up not being net harmful is if we are ambitious but also prioritise being correct and having good epistemics really hard. This is not the vibe I get when I talk to many community builders. A lot of them seem happy with “making more EAs is good” and forget that the mechanism for EA being positively impactful relies pretty heavily on our ability to steer correctly. I think they’ve decided too quickly that “the EA movement is good, therefore I must protect and grow it”. I think EA ideas are really good; I am less sure about the movement.
In the past, posts like Tegan’s have been useful for getting me to apply to things.
Another thing I liked was when a job post had a comment at the bottom saying that women and minorities are less likely to feel qualified to apply, mentioning something along the lines of this https://hbr.org/2014/08/why-women-dont-apply-for-jobs-unless-theyre-100-qualified and encouraging them to apply.
Just conjecturing here, but I think one reason the ratio is worse than it has to be is that EA jobs are usually somewhat atypical, so it is difficult to figure out whether you’re actually qualified (compared to more “normal” jobs), which makes people who have a tendency to feel underqualified even less likely to apply. Plus, because the community is small and a lot of people hear about opportunities / get encouraged to apply via people in their social networks, people from currently underrepresented groups are less likely to have those social connections.
I also think mentorship programs are helpful. A one-off call or a series of calls with a more experienced person is helpful for promising people (regardless of gender or other demographics) who don’t have friends or social connections already in a field to figure out how to enter it.
I had a negative reaction to the post but felt hesitant to reply because of the emotional content. What the OP is experiencing does suck, and I think they (and others) could make less of their identity about the EA movement, and that this would be a good thing. I don’t like that “small-scale EA community builders” are having to apologise for things others into EA have done, or having to spend time figuring out how to react to EA drama. That does seem like a waste of time and emotional energy, and also unnecessary.
I would appreciate something like a (pinned?) megathread for the topic, with discussion of the drama restricted to that post.
Edit: I think the current approach of basically doing that, and downgrading everything else on the topic to personal blog, makes sense.
I hope more people, especially EA community builders, take some time to reevaluate the value of growing the EA movement and of EA community building. It seems like a lot of community builders are acting as if “making more EAs” is good for its own sake. I’m much less sure about the value of growing the EA community, and uncertain about whether it is positive at all. It seems like a lot of people are having to put energy into doing PR, making EA look good, and fighting fires in the community, when their time could be better spent directly focusing on how to solve the big problems.
But I also think directly focusing on how to solve the big problems is difficult, and “get more people into EA and maybe some of them will know how to make progress” feels like an easy way out.
I haven’t thought about this much. I am just reporting that some people I briefly talked to thought EA was mainly that and had a negative opinion of it.
I’ll list some criticisms of EA that I heard, prior to FTX, from friends/acquaintances who I respect (which doesn’t mean that I think all of these critiques are good). I am paraphrasing a lot so might be misrepresenting some of them.
Some folks in EA are a bit too pushy about getting new people to engage more. This was from a person who thought of doing good primarily in terms of their contracts with other people, supporting people in their local community, and increasing cooperation and coordination in their social groups. They also cared about helping people globally (they donated some of their income to global health charities and were vegetarian) but felt like it wasn’t the only thing they cared about. They felt like often in their interactions with EAs, the other person would try to bring up the same thought experiments they had already heard in order to get rid of their “bias towards helping people close to them in space-time”. This was annoying for them. They also came from a background in law and found the emphasis on AI safety off-putting, because they didn’t have the technical knowledge to form an opinion on it, and the arguments were often presented to them by EA students who failed to convince them, and who they thought also didn’t have good reason to believe the arguments themselves.
Another person mentioned that it looked weird to them that EA spent a lot of resources on helping itself. Without looking too closely, the ratio of resources spent on meta EA stuff to directly impactful stuff seemed suspiciously high to them. Their general priors about communities with access to billionaire money, influence, and young people wanting to find a purpose also made them assume negative things about the EA community. This made it harder for them to take some EA ideas seriously. I feel sympathetic to this, and feel like if I weren’t already part of the effective altruism community and didn’t understand the value in a lot of the EA meta stuff, I would perhaps feel similarly suspicious.
Someone else mentioned that lots of EA people they met came across as young, not very wise, and quite arrogant for their level of experience and knowledge. This put them off. As one example, they had negative experiences with EAs who didn’t have any experience with ML trying to persuade others that AI x-risk was the biggest problem.
Then there was suspicion that EAs, because of their emphasis on utilitarianism, might be willing to do things like lie, break rules, or push the big guy in front of the trolley if it were for the “greater good”. This made them hard to trust.
Some people I have briefly talked to mainly thought EA was about earning to give by working on Wall Street, and they thought it was harmful because of that.
I didn’t hear the “EA is too elitist” or “EA isn’t diverse enough” criticisms much (I can’t think of a specific time someone brought either up as a reason they chose not to engage more with EA).
I have talked to some non-EA friends about EA stuff after the FTX crisis (including one who himself lost a lot of money that was on the platform), mostly because they sent me memes about SBF’s effective altruism. My impression was that their opinion of EA (generally mildly positive, though not personally enthusiastic) did not change much as a result of FTX. This is unfortunately probably not the case for people who heard about EA for the first time because of FTX; they are more likely to assume bad things about EAs if they don’t know any in real life (and I think this is, to some extent, a justified response).
My quick alternative hypotheses: they could also be using disagree vote to mean “I don’t work this many hours / this isn’t normal for me” or “I don’t seriously think you get that many hours of actual work done”.
Besides that, I also think there’s a tendency for people to feel more comfortable reading answers to this question that are on the lower side.
Does your work involve things that require deep focus (eg: substantial programming, research, writing for long periods of time etc.)?
Yeah, that does seem useful.
I still think I’ve found being pseudonymous more useful than writing under my name. It does feel like I’m less restricted in my thinking because I know there are no direct negative or positive effects on me personally for sharing my thoughts. So, for example, I’ve surprisingly found it easier to express genuine appreciation for things or people. Perhaps I’m too obsessed with noticing how the shape of my thoughts changes depending on how I think they will be perceived, but it has been very interesting to notice that. Like, it genuinely feels like there are more thoughts I am allowed to think when I’m trying on a pseudonym (I think this was much starker a few months ago, so maybe I’ve squeezed out most of the benefit by now).