The Economist describes EAs as ‘Oxford University philosophers who came up with the name in 2011, New York hedge-fund analysts and Silicon Valley tech bros’; many might think this is exaggerated, but I think it accurately describes the image projected by the loudest voices in our community, and if we want our policy recommendations to be taken seriously, we should actively work to change this.
Disastrous experiences undergone by women in AI safety (https://forum.effectivealtruism.org/posts/LqjG4bAxHfmHC5iut/why-i-spoke-to-time-magazine-and-my-experience-as-a-female), hosts and guests of the 80k podcast laughing at the ‘wokeness’ of this or that when civil rights/feminism are brought up in conversation, the constant refusal to admit that EA has an issue of sexism and homogeneous cultural norms (see all posts related to diversity + https://forum.effectivealtruism.org/posts/W8S3EuYDWYHQxm77u/racial-demographics-at-longtermist-organizations), and posts on LessWrong discussing foetal sentience without mentioning reproductive rights ONCE are, I think, strong reasons why we are seen as such an elitist, un-diverse, and culturally closed community.
These things happen frequently enough to show that they are not one-off, marginal issues.
We can do better. We should do better. And if we don’t tackle these issues seriously and remain in denial, we will be unable to pass AI safety regulations or be taken seriously when we talk about existential risks, because people will brush it all off as a ‘tech bro thing’. And I must say, I had the same reaction before reading up on the topic, because the off-putting aspect of the culture around GCR is so strong that when you do care about that stuff, it’s hard not to be repelled forever. And the external world cares about that stuff, fortunately for me, and unfortunately for some of you!
If you are truly worried about GCR, consider these issues and try to talk about them with community members. We cannot just stay among ourselves and pat ourselves on the back for creating efficient charities. Also, talk to me if you recognize this cultural off-puttingness I’m talking about: I’m preparing a series of posts on diversity and AI and need to back it up as much as I can, given how young the field is.
Downvoting this quicktake won’t make these issues go away; if we are real truth-seekers, we cannot stay in denial.
Some reactions I have to this:
In my (limited) personal experience, AI safety / longtermism isn’t diverse along racial or gender lines, which probably indicates talented people aren’t being picked up. Seems worth figuring out how to do a better job here. Similarly for EA as a whole, although this varies between cause areas (iirc EA animal advocacy has a higher % of women than EA as a whole?)
I’m genuinely unsure how accurate / fair the statement “EA has an issue of sexism” is. But certainly there is a nonzero amount, which is more than there should be, and the accounts of sexism and related unwelcome-attitudes-toward-women in the community make me very sad.
The optimal amount of “cultural offputtingness” is not zero. It should be possible to “keep EA weird” without compromising on racial/gender/etc inclusion, and there are a lot of contingent weird things about EA that aren’t core to its goodness. But there are also a lot of ways I can see a less-weird EA being a less-good EA overall.
The link between increased diversity / decreased tech-bro reputation and passing AI safety regulations seems tenuous to me.
I have a general, vague sense that “do this for PR reasons” is not a good way to get something done well.
It doesn’t seem like public perception updates very frequently (to take one example, here’s Fast Company two days ago saying ETG is the “core premise” of EA). I don’t think we should completely give up here, but unfortunately the “EA = techbro” perception is pretty baked in and I expect it to only change very gradually, if at all.
EA is also not very politically diverse—there are very few Republicans, and even the ones that are around tend to be committed libertarians rather than mainstream GOP voters. If we’re just considering the impact on passing AI safety regulations, having a less left-leaning public image could be more useful. (For the reasons in the two bullet points above though, I’m also skeptical of this idea; I just think it has a similar a priori plausibility.)
On reflection, I think the somewhat combative tone (framing disagreement as “refusal to admit” and being “in denial”) is fine here, but it did negatively color my initial reading, and probably contributed to some downvotes / disagree votes.
Two more nitpicky points:
A Google search turned up one instance of a guest discussing wokeness, which was Bryan Caplan discussing why not to read the news:
(15:45) But the main thing is they’re just giving this overwhelmingly skewed view of the world. And what’s the skew exactly? The obvious one, which I will definitely defend, is an overwhelming left-wing view of the world. Basically, the woke Western view is what you get out of almost all media. Even if you’re reading media in other countries, it’s quite common: the journalists in those other countries are the most Westernised, in the sense of they are part of the woke cult. So there’s that. That’s the one that people complain about the most, and I think those complaints are reasonable. But long before anyone was using the word “woke,” there’s just a bunch of other big problems with the news. The negativity bias: bad, bad, bad, sad, sad, sad, angry, angry, angry.
This wasn’t in the context of civil rights or feminism being discussed, and I couldn’t find any other instances where that was the case. Rob doesn’t comment on the “woke” bit here one way or another, and doesn’t laugh during these paragraphs. So unless there’s an example I missed, I think this characterization is incorrect.
This is probably an example of decoupling vs contextualizing norms clashing, but I don’t think I see anything wrong here. Whether or not a fetus is sentient is a question about the world with some correct answer. Reproductive rights also concern fetuses, but don’t have any direct bearing on the factual question; they also tend to provoke heated discussion. So separating out the scientific question and discussing it on its own seems fine.
It is certainly my own fault for not immediately noting down when this happened; it might have been an EA-adjacent media outlet.
As for reproductive rights, I disagree. They provoke heated discussion precisely because they are highly important to those they directly concern, women, who are the ones losing control over their lives if these rights are suppressed. If men were the ones directly affected, e.g. losing control over their own bodies and lives, this would be mentioned much more here; but since EA is 70% male, it is not. Raising questions about foetal sentience is fine, but writing these posts without even a mention of reproductive rights hints towards using this thinking to legitimize what is currently happening with Roe v. Wade.
It’s a classic EA thing: EAs who are not affected at all by a topic (reproductive rights, poverty) discuss what to do about it without taking into account the perspective of those who actually deal with these issues. And here it was exactly that: a man who had the luxury of raising these questions because his life and body will never suffer the potential consequences of such a post.
The result of such posts is that talent, i.e. many women, is pushed away, because who wants to stay in a movement that doesn’t care at all about your opinion on things that concern you directly?
Men and women in the US have only recently (post-2020) started to differ by more than a small margin in their opinions on abortion, for what it’s worth: https://news.gallup.com/poll/506759/broader-support-abortion-rights-continues-post-dobbs.aspx
Though that is certainly compatible with women caring more about the opinions on abortion they do hold.
Yes, this is exactly the issue. Talent isn’t being picked up. If we are going to do good for future beings, we need to take into account as many perspectives as we can instead of staying within the realm of our own male-centered Western narratives.
Many posts exist on the EA forum about diversity that show how bad EA can be for women. The Times article on sexual assault is just the tip of the iceberg.
Being weird is fine (e.g. thinking about far-fetched ideas about the future). Calling out sexism is not incompatible with that.
The thing is, doing it just to ‘reduce sexism and improve women’s wellbeing in EA’ is clearly not a worthy cause for many here. So I guess I have to use arguments that make sense to others. And it is a real issue, though: EA ideas, and thus funding in the right direction, could be so much more widespread and accepted without all these PR scandals.
The hostile tone has to do with being tired of having to advocate for the simplest things. The same comments appear on every post denouncing diversity issues: ‘it is not a priority’, ‘there is no issue’, ‘stop being a leftist’.
People who downvote have probably not even read the forum post on abuse in the AI spheres, even though it shows how ingrained sexism is in this Silicon Valley culture. They don’t care, because it doesn’t concern them. Wanting the wellbeing of animals is all good and fine, but when it comes to women and people of colour, it becomes political, so yes, there is denial. Animals can’t speak; they can’t upset them. Women and people of colour speak and ask for more justice, and that is where it becomes political, because then these men have to share power and acknowledge harm. So I don’t think denial is a bad word.
When your life is at stake, when women are being harassed, raped, and denied the right to dispose of their own bodies and lives, the tone can get hostile. I have something to lose here; for those who downvote me, it’s just another intellectual topic. I won’t apologize for that.
I think this kind of discussion is important, and I don’t want to discourage it, but I do think these discussions are more productive when they’re had in a calm manner. I appreciate this can be difficult with emotive topics, but it can be hard to change somebody’s mind if they could interpret your tone as attacking them.
In summary: I think it would be more productive if the discussion could be less hostile going forwards.
The “please send me supporting anecdotes” method of evidence gathering.
Well, that is one step among others, and asking is better than not asking and acting as if there were no issues at all. I didn’t specify the epistemic value I would attribute to these testimonies, so this is a sneaky comment.
But I was expecting you: you never fail to comment negatively on posts that dare to bring up these issues. For someone who clearly says, in a comment under a post about political opinions in EA, that we need more right-wingers in EA, and who also says that EA shouldn’t carry leftist discourses to avoid being discredited, you sure are consistent in your fights. Nothing much about the content of the post, though, so I guess you didn’t have much to say aside from inferring the epistemic value I’d put on anecdotal data.
For those who would worry about the ‘personal aspect’ of this comment, understand that when you see a pattern of someone constantly advocating against a topic every time it is brought up, it seems legitimate to try to understand why that happens. There is motivated reasoning here; I don’t expect objectivity on this topic from someone who so openly shows their political camp. Since Larks isn’t attacking anything content-wise about the post other than making an assumption about methodology, I feel justified in noting Larks’ lack of objectivity.
That is all I needed to say; I won’t comment further, to avoid escalation. I just want people to have a clear picture of who is commenting here and the motivation behind it.