The graduate scheme is a good idea; it might be worth looking into who presents there and asking them. I was thinking mostly about newbies, but also mid-career professionals who might want to switch fields and go there, since not all of these jobs require specific technical knowledge that has to be built up over time. Yes, it might be useful to have such a list; I might set aside some hours one day and try a draft!
Yeah, for sure, but I gave the example of the AI safety page because it would be great to have a sense of the kinds of roles and competencies in general, instead of just having some of these roles depending on what is currently on the market. Well, it is a confirmation that it might be an interesting project to do, since it doesn’t seem to exist!
[Question] Is there a recap of relevant jobs in the nuclear risk sector/nuclear energy sector for EAs?
That kind of spontaneous research guided by ethics is really one of my favourite aspects of EA. Thank you for undertaking this research and highlighting the very unhealthy dynamics of power exercised by those at the top. We know them, without really knowing about them: a reminder from time to time never hurts.
Personally I don’t feel attacked at all: I am happy that these people give money for useful purposes, and that has nothing to do with calling them out for their behaviours or not. I don’t believe in the ‘don’t bite the hand that feeds you’ kind of thinking. It’s our role, as EAs who benefit more or less distantly from this money, to be very aware of who gives the money and what price has to be paid for that.
When people criticize EA because they are shocked that we accept being fed by blood money (this includes billionaires holding large shares in companies that exploit workers), I have no moral qualms about saying that I prefer doing something good with this money rather than nothing; I am still working on structural changes with that money. As long as I haven’t read any convincing, data-supported report on how being in EA and using these funds is more harmful than working only at a structural level to change these dynamics, I think that EA remains my best bet.
Yes, I think this is a very useful phenomenon to point at, and some people have a very naïve understanding of what these labs do, especially technical AI safety researchers whose technical education has not put critical thinking at its heart. I have heard a lot of very candid remarks about the political influence exerted by these labs, and I am worried that these researchers lack a more global understanding of the effects of their work.
Given OpenAI’s recent updates on its military ban and the transparency of its documents, I find myself more and more cautious when it comes to trusting anyone working on AI safety. I would love to see representatives of these labs address the concerns raised in this post in a credible way.
The post did a great job of describing exactly what Owen is being reproached for. I do not see anyone in the comments claiming that he is more than what is described in the post, and in general, I do not see anything pointing to an overreaction from anybody.
Citing Epstein looks like a strawman and does not make my point less salient: that some members jumping to defend Owen is an insult to the testimony of these women, as if Owen’s good behaviour removed his bad behaviours, and contradicts what has been courageously emphasized in this post, i.e. that EAs knowing each other and defending each other encourages secrecy and the overlooking of potentially serious misconduct.
I would even add that assuming the community will conflate Owen’s case with Epstein’s is patronizing and far-fetched; I think that people are able to make the distinction between a sex offender who got jail time and Owen.
I do not know Owen. I am, however, a bit worried to see two people in these comments advocating for Owen while this affair does not look good and the facts speak for themselves; there is a certain irony in seeing these two people come to defend Owen while the head of community health, Julia, admits to a certain level of bias in handling this affair since he was her friend. It seems that EA people do not learn from the mistakes that are courageously being owned up to here. This post talks about Owen misbehaving: it does not talk about Owen’s good deeds. So this kind of comment defeats the point of this post.
Can you put yourself for two seconds in the shoes of these women who received unwanted and pressing attention from Owen, with all the power dynamics that are involved, reading comments about how Owen is responsible and a great addition to the community, even after women repeatedly complained about him? What I read is ‘He treated me well, so don’t be so quick to dismiss him’ and ‘I’ve dealt with worse cases, so I can assure you this one is not that bad’.
Do you really think that such attitudes encourage women to speak up? Do you really think that this is the place to do this?
Edit: I want to thank those who have written this post and highlight the courage it has taken. I know that I am flagging the things that are not okay, e.g. these comments, but I am very happy that this does not stay within community building circles and gets shared. Yay for transparency, yay for honesty, yay for highlighting what is wrong if we want to do better. I know of instances where such bad press has been kept behind closed doors, and I’m glad this one has not. So, thanks. I yell a lot (I wish I did not have to, of course) but I also want to be grateful.
Yes, this is exactly the issue. Talent isn’t being picked up. If we are going to do good for future beings, we need to take into account as many perspectives as we can instead of staying within the realm of our own male-centered western narratives.
Many posts exist on the EA forum about diversity that show how bad EA can be for women. The Times article on sexual assault is just the tip of the iceberg.
Being weird is fine (e.g. thinking about far-fetched ideas about the future). Calling out sexism is not incompatible with that.
The thing is, doing it just to ‘reduce sexism and improve women’s wellbeing in EA’ is clearly not a worthy cause for many here. So I guess I have to use arguments that make sense to others. And this is a real issue, though: EA ideas, and thus funding in the right direction, could be so much more widespread and accepted without all these PR scandals.
The hostile tone has to do with being tired of having to advocate for the simplest things. There are always the same comments on all the posts denouncing diversity issues: ‘it is not a priority’, ‘there is no issue’, ‘stop being a leftist’.
People who downvote have probably not even read the forum post on abuse in the AI spheres, even though it shows how ingrained sexism is in this Silicon Valley culture. They don’t care, because it doesn’t concern them. Wanting the wellbeing of animals is all good and fine, but when it comes to women and people of colour, it becomes political, so yeah, there is denial. Animals can’t speak, so they can’t upset them. Women and people of colour speak and ask for more justice, and that’s where it becomes political, because then these men have to share power and acknowledge harm. So I don’t think denial is a bad word.
When your life is at stake, when women are being harassed, raped, and denied the right to control their own bodies and lives, the tone can get hostile. I have something to lose here; for those who downvote me, it’s just another intellectual topic. I won’t apologize for that.
It is certainly my own fault for not immediately noting down when this happened; it might have been an EA-adjacent media outlet.
As for reproductive rights, I disagree. They provoke heated discussion precisely because they are highly important to those they directly concern, women, because they are the ones losing control over their lives if these rights are suppressed. If men were the ones directly affected, e.g. losing control over their own bodies and lives, this would be mentioned much more here, but since EA is 70% male, it is not. Raising questions about foetus sentience is fine, but writing these posts without even a mention of reproductive rights hints towards using this thinking to legitimize what is currently happening with Roe v. Wade.
It’s a classic EA thing: EAs who are not at all concerned by a topic (reproductive rights, poverty) talk about what to do about it without taking into account the perspective of those who actually deal with these issues. And here it was exactly that: a man who had the luxury of raising these questions because his life and body will never suffer the potential consequences of this post.
The result of this post is that talent, i.e. many women, is pushed away, because who wants to stay in a movement that doesn’t care at all about your opinions on things that concern you directly?
Well, that is a step among others, and asking is better than not asking and acting as if there were no issues at all. I didn’t specify the epistemic value I would attribute to these testimonies, so this is a sneaky comment.
But I was expecting you; you never fail to comment negatively on posts that dare to bring up these issues. For someone who clearly says, in a comment under a post about political opinions in EA, that we need more right-wingers in EA, and who also says that EA shouldn’t be carrying leftist discourses to avoid being discredited, you sure are consistent in your fights. Nothing much about the content of the post, though, so I guess you didn’t have much to say aside from inferring the epistemic value I’d put on anecdotal data.
For those who would worry about the ‘personal aspect’ of this comment, understand that when you see a pattern of someone constantly advocating against a topic every time it is brought up, it seems legitimate to me to try to understand why that happens. There is motivated reasoning here; I don’t expect objectivity on this topic from someone who so openly shows their political camp. Since Larks isn’t attacking anything content-wise about the post other than some assumption about methodology, I do feel justified in noting Larks’s lack of objectivity.
That is all I needed to say; there is no need for me to comment further, to avoid escalation. I just want people to have a clear picture of who is commenting here and the motivation behind it.
The Economist describes EAs as ‘Oxford University philosophers who came up with the name in 2011, New York hedge-fund analysts and Silicon Valley tech bros’; while many might think it’s exaggerated, I think that’s a relevant description of the image that is given by the loudest voices in our community, and if we want to be taken seriously in terms of policy recommendations, we should aim to change this actively.
Disastrous experiences undergone by women in AI safety (https://forum.effectivealtruism.org/posts/LqjG4bAxHfmHC5iut/why-i-spoke-to-time-magazine-and-my-experience-as-a-female), hosts and guests of the 80k podcast laughing at the ‘wokeness’ of this or that when civil rights/feminism are brought up in a conversation, the constant refusal to admit that EA has an issue with sexism and homogeneous cultural norms (see all posts related to diversity + https://forum.effectivealtruism.org/posts/W8S3EuYDWYHQxm77u/racial-demographics-at-longtermist-organizations), and posts on LessWrong talking about foetus sentience without ONCE mentioning reproductive rights are, I think, strong elements of why we are seen as such an elitist, un-diverse and culturally closed community.
The frequency at which these things happen is enough to know that these issues are not a one-time, marginal kind of thing.
We can do better. We should do better. And if we don’t tackle these issues seriously and keep being in denial, we will be unable to pass AI safety regulations or be taken seriously when we talk about existential risks, because people will brush it off as a ‘tech bro thing’. And I must say, I had the same reaction before reading up on the topic, because the off-putting aspect of the culture around GCR is so strong that even when you do care about that stuff, it’s hard not to be repelled forever. And the external world cares about that stuff, fortunately for me, and unfortunately for some of you!
If you are truly worried about GCR, consider these issues and try to talk about them with community members. We cannot just stay among ourselves and pat ourselves on the back for creating efficient charities. Also, talk to me if you recognize this cultural off-puttingness I’m talking about: I’m preparing a series of posts on diversity and AI and need to back it up as much as I can, despite the youth of the field.
Downvoting this quicktake won’t make these issues go away; if we are real truth-seekers, we cannot stay in denial.
I agree. Involving other actors forces us to examine EA’s weirdness and unappealing behaviours in depth, brings a ton of experience and networks, and amplifies impact.
This is something that I have been seriously thinking about when organizing big projects, especially when it comes to determining the goals of a conference and the actors we choose to invite. Especially in an area such as AI safety, where safety concerns should be promoted and advocated among non-EA policy actors.
While it’s great that strong voices in this community such as Ord, MacAskill and others have come to vouch for Zach’s great qualities, I would like to read about Zach’s concrete work, failures and achievements. This community lives to defer, and this seems like a representative example of that; unfortunately, I’m not sure how relevant it is here.
All I know about him is that he worked on global health and has close ties to Anthropic, and thus to AI safety. That’s great for sure; I love the apparent ability to care about many causes at the same time. But what did he do? What is his path? How did he overcome challenges? What has he failed at? Which projects best embody his greatest work? And more importantly: for what impact?
But a core thing that you don’t mention (maybe because you are a native speaker, and you have to not be one to realize it; I’m not being mean here, simply stating what I think is a fact) is that jargon adds to the effort.
Not only do you have to speak flawless English and not mull over potential mistakes that might make you look foolish in the eyes of your interlocutor and reduce the credibility of your discourse, but you also have to use the right jargon. Add saying something meaningful on top of it: you have to pay attention to how you say things (language + jargon) and to what you say.
Try handling the feeling of inferiority that inevitably arises when your interlocutor speaks perfect English and can focus 100% on the content of what they say, while you have to handle the language + the jargon + the meaning behind it, and that gives you a pretty good mix for feeling like a fraud, especially when you disagree with someone. Try a sensitive topic, such as prioritizing x-risk over global health, and add all that mental load. Good luck!
I should have specified: the fellowships I’m talking about are in London/Switzerland. Still expensive, but nothing that justifies paying people with barely a bachelor’s degree and no work experience that much.
Thanks for saying this. Sadly there is a lot of deference when it comes to AI safety and its questionable researchers, and while EA claims it loves criticism, I didn’t meet much love when raising my concerns.
In a group that is 80% rich white males with a STEM background, where AI safety allows them to get recognition for their technical skills AND a huge paycheck, raising such concerns never goes well.
I’m actually preparing a series of posts on the lack of diversity within AI, and cultural biases will be part of it: how your critical thinking shuts down when it comes to doing work you love, and how the evidence that existential risks should be prioritized falls apart under hard criticism (see David Thorstad’s criticism of Bostrom’s famous 10^16 number). I expect much pushback and blind denial, as I can see with the comments under my own post, which are pretty much just saying ‘AI researchers deserve to be paid well because ML is hard’. I have news: that’s far from unique to AI safety, sadly.
3000 per month for beginners in an AI fellowship is way, way too much.
We need to stop considering machine learning engineers as la crème de la crème and justifying these exorbitant salaries based on that assumption. The tractability and measurability of impact of these people’s work are highly questionable (the causality series written by RP rates existential risk research at 2 on a scale from 1 to 5 for tractability).
We’re talking fellows, so people with very little experience and no certainty of impact at all. You’re comparing this with fully-fledged Anthropic researchers, which doesn’t make sense at all.
And I could talk about how these researchers at Anthropic are probably paid way too much for the tractability of their work, but I guess that calls for another post.
That’s one example, and only one, though; many other fellowships are very well paid, up to 3,000 euros per month. I’m thinking of SERI/CHERI/CERI.
Absolutely, and of course I’ll get feedback from these orgs once the draft isn’t a draft anymore. Amateurism in EA when it comes to nuclear risks has been denounced more than once, so I will try to steer clear of that!