Here’s my take on why this might not be a serious issue:
It’s very common in other forums I’m a part of for literally everyone except me to be pseudonymous to varying degrees, and I find it pretty rare for people to use their real names, especially when giving spicy takes that people disagree with. I’m actually pleasantly surprised by the number of people in EA who give spicy takes with their real names attached.
Likewise, I think it’s pretty natural to want to be able to speak unfiltered without having to worry about how what you say will affect your reputation, rightly or wrongly. Especially given how bad tail risks to your reputation can be, I understand the fear of using a real name to challenge the status quo, and I’m super glad people still have an outlet to give critical spicy feedback. Jeff Kaufman writes about Responsible Transparency Consumption—see also what Holden said—and this is a norm I very much value; I think it’s stronger in EA than elsewhere, but it’s still not a guarantee.
That being said, I am definitely super duper concerned by the number of people who say something like “posting anonymously because I don’t want to lose out on jobs or tank my career”, because the view that EA employers are so fragile as to deny job opportunities based on EA Forum hot takes is hopefully greatly exaggerated and very disturbing if not. (At Rethink Priorities, we don’t take people’s EA Forum user history into account at all when hiring, and several of the EA Forum users with the highest total karma work at RP.)
Also, obviously, the fact that there is so much stuff right now that people feel the need to strongly but anonymously criticize is itself very concerning—the object-level stuff is a serious issue.
the view that EA employers are so fragile as to deny job opportunities based on EA Forum hot takes is hopefully greatly exaggerated and very disturbing if not.
In my personal experience most EA orgs have been extremely tolerant of even harsh criticism. I would guess that criticising CEA, MIRI, CSER, GWWC, LTFF, EVF, CE, OP, GCRI, APPGFG, GAP (and probably others I have forgotten) has been overall positive for me. I don’t know of any other movement where leaders are as willing to roll with the punches as they are here and I think they deserve a lot of credit for that.
On the other hand, ACE did attempt to cancel and defund Anima because of some hot takes on Facebook, so maybe things are worse on the animal welfare side of things, which I am less familiar with.
I agree with your first paragraph. I disagree about your read of the ACE <> Anima situation, but there’s no need to re-litigate that here. Regardless of your feelings about the Anima situation, I do think ACE has been criticized so many times in so many different ways by so many different people and they have been good about it.
That link says “Attempting to cancel an animal rights conference speaker because of his views on Black Lives Matter” which I think is different from objecting to criticism about ACE itself?
At Rethink Priorities, we don’t take people’s EA Forum user history into account at all when hiring
This seems incredibly surprising to me. Someone writes the best post in existence on whether mealworms suffer, you are considering hiring them to research invertebrate sentience, and you’re like “don’t care – your research was posted on the forum so I’m not going to look at it?”
Do you look at anything people have done before working at RP?
Disclaimer: I work at RP, but I am not speaking on behalf of RP here or of those involved in its hiring processes.
I think this is basically accurate for the standard hiring round, depending on what you consider “when hiring”. For example, my understanding is that knowing an author has written a post like the one you described would likely contribute to whether RP reaches out to invite them to apply for a role (though the bar for an invitation is much, much lower than having authored such a post), but the invitation confers no advantage during the hiring process itself, which leans rather heavily on the test task / skills assessment portions (one example here).
RP also aims not to select for knowledge of or experience in EA beyond the extent that it is relevant to the specific role someone is applying for (keeping in mind that much EA knowledge, like other knowledge, can be learned). My personal impression is that a process of checking EA Forum history, even if somehow blinded, would risk biasing hiring in favour of active EAF users in a way that is not reliably predictive of selecting the best candidate.
I have less insight into the processes that do not fit this standard hiring model (e.g. choosing who to reach out to as contractors).
Do you look at anything people have done before working at RP?
Not really. We mainly have people do test tasks. Usually people who have written really good things on the EA Forum also do really well on test tasks anyways.
In the past we have had people submit writing samples and we’ve graded those. That writing sample could be an EA Forum post. So in that case, a great EA Forum post could directly affect whether you get hired.
Definitely open to this being a bad approach.
The main thing I was trying to get at, though, is that having downvoted EA Forum comments or posting some spicy takes doesn’t affect RP employment.
Interesting, thanks!
It’s very common in other forums I’m a part of for literally everyone except me to be pseudonymous to varying degrees, and I find it pretty rare for people to use their real names, especially when giving spicy takes that people disagree with. I’m actually pleasantly surprised by the number of people in EA who give spicy takes with their real names attached.
I think there might be an important difference between pseudonymous and burner accounts here. I basically have no problem with consistent pseudonymous identities, whereas I share the feeling that having a bunch of throwaway accounts posting anonymous complaints is kind of bad.
Yeah, I think that’s right. Though it’s hard to know whether the burner or throwaway accounts belong to people who otherwise post on the EA Forum under a pseudonym—they could be burners created by people who otherwise post under their real names, or burners created by people who don’t otherwise use the EA Forum at all.
because the view that EA employers are so fragile as to deny job opportunities based on EA Forum hot takes is hopefully greatly exaggerated and very disturbing if not.
There’s at least some evidence to suggest these fears are justified. Take the thankfully scrapped “PELTIV” proposal for tracking conference attendees:
Individuals were to be assessed along dimensions such as “integrity” or “strategic judgment” and “acting on own direction,” but also on “being value-aligned,” “IQ,” and “conscientiousness.” Real names, people I knew, were listed as test cases, and attached to them was a dollar sign (with an exchange rate of 13 PELTIV points = 1,000 “pledge equivalents” = 3 million “aligned dollars”).
I don’t think it’s unreasonable to be worried that if people are being tracked for their opinions at conferences, their forum presence might also be. I’ll repeat that this proposal was scrapped, but I get why people would be paranoid.
There’s also the allegation in the “Doing EA Better” post that:
Hiring and funding practices often select for highly value-aligned yet inexperienced individuals over outgroup experts.
If this is true, then criticising EA orthodoxy might make you less “value-aligned” in the eyes of EA decision makers, and cost you real money.
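For a rough sense of the dollar figures involved, here is a minimal sketch of what the exchange rate quoted above implies per PELTIV point. The rate (13 points = 1,000 “pledge equivalents” = 3 million “aligned dollars”) comes from the article; the script itself and its variable names are hypothetical, purely for illustration.

```python
# Illustrative sketch only: the exchange rate is the one quoted in the
# article (13 PELTIV points = 1,000 "pledge equivalents" = 3,000,000
# "aligned dollars"); the script itself is hypothetical.

PELTIV_POINTS = 13
PLEDGE_EQUIVALENTS = 1_000
ALIGNED_DOLLARS = 3_000_000

# Implied value of a single PELTIV point under that rate.
dollars_per_point = ALIGNED_DOLLARS / PELTIV_POINTS     # ~230,769 "aligned dollars"
pledges_per_point = PLEDGE_EQUIVALENTS / PELTIV_POINTS  # ~76.9 "pledge equivalents"

print(f"1 PELTIV point ≈ {dollars_per_point:,.0f} aligned dollars")
print(f"1 PELTIV point ≈ {pledges_per_point:.1f} pledge equivalents")
```

If that reading of the article is right, each point of perceived value alignment would have been valued at roughly $230,000 in “aligned dollars”, which is part of why a forum-history version of this scoring would feel so consequential.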
Maybe people have started to use “value-aligned” to mean “agrees with everything we say”, but the way I understand it, it means “_cares_ about the same things as us”. Being value-aligned does not mean agreeing with you about your strategy, or much else. In fact, someone posting a critical screed about your organization on the EA Forum is probably decent evidence that they are value-aligned: they cared enough to turn up in their spare time and talk about how you could do things better (implicitly: to achieve the goal you both share).
There are definitely some criticisms that suggest that you might not be value-aligned, but for most of the ones I can think of it seems kind of legitimate to take them into account. e.g. “Given that you wrote the post ‘Why animal suffering is totally irrelevant’, why did you apply to work at ACE?”
So, there are many things that could be said about PELTIV, but I’m not convinced that filtering for value-alignment filters negatively for criticality; if anything, I think it’s the opposite.
There are definitely some criticisms that suggest that you might not be value-aligned, but for most of the ones I can think of it seems kind of legitimate to take them into account. e.g. “Given that you wrote the post ‘Why animal suffering is totally irrelevant’, why did you apply to work at ACE?”
Yeah in contrast I would generally expect a post called “Statistical errors and a lack of biological expertise mean ACE have massively over-estimated chicken suffering relative to fish” to be a positive signal, even though it is clearly very critical.
I agree with you that filtering for alignment is important. The mainstream non-profit space talks a lot about filtering for “mission fit”, and I think that’s a similar concept. Obviously it would be hard to run an animal advocacy org with someone chowing down on chicken sandwiches for lunch every day in the organization’s cafeteria.
But here’s my hot take on the main place I see this go wrong in EA: some EAs I have talked to, including some quite senior ones, overuse “this person may be longtermist-adjacent and seem to be well-meaning but they just don’t give me enough vibes that they’re x-risk motivated and no I did not actually ask them about it or press them about this” → “this is not a person I will work with” as a chain of reasoning, to the point of excluding people with nuanced views on longtermism (or just confused views, who could learn and improve), and this makes the longtermist community more insular and worse. I think PELTIV and the like have a similar flavor of making snap judgments from afar without actually checking them against reality (though there were other clear problems with it too).
My other take about where this goes wrong is less hot and basically amounts to “EA still ignores outside expertise too much because the experts don’t give off enough EA vibes”. If I recall correctly, nearly all opinions on wild animal welfare in EA had to be thrown out after discussion with relevant experts.
“this person may be longtermist-adjacent and seem to be well-meaning but they just don’t give me enough vibes that they’re x-risk motivated and no I did not actually ask them about it or press them about this”
Fortunately this can be fixed by publishing pamphlets with the correct sequences of words helpfully provided, and creating public knowledge that if you’re serious about longtermism you just need to whisper the correct sequence of words to the right person at the right time.
Jokes aside, there’s a real threat of devolving into applause-light factories (I’ll omit the rant about how the entire community building enterprise is on thin ice). Indeed, someone at Rethink Priorities once told me they weren’t convinced that the hiring process was doing a good job of separating “knows what they’re talking about, can reason about the problems we’re working on, cares about what we care about” from “ideological passwords, recitation of shibboleths”, and that it was one of the things they really wanted to get right but weren’t confident they were getting right. It’s not exactly easy.
Yeah, I certainly don’t think our hiring process is perfect at this either. These kinds of concerns weigh on me a lot, and we’re constantly thinking about how we can get better.
I haven’t seen that, but if that’s happening then I agree it’s bad and we should discourage it!
The article specifically claimed “Low PELTIV value was assigned to applicants who worked to reduce global poverty or mitigate climate change, while the highest value was assigned to those who directly worked for EA organizations or on artificial intelligence.” That suggests that a post advocating for a reallocation of effort to the former might be relevant.
I agree that if value-aligned is being used in the sense you are talking about, then it’s fine.
The allegation is that it’s not being used in that sense, but rather to punish people in general for having unorthodox beliefs.
The article I linked states that:
Low PELTIV value was assigned to applicants who worked to reduce global poverty or mitigate climate change, while the highest value was assigned to those who directly worked for EA organizations or on artificial intelligence.
This would be completely fine in an AI risk organisation: obviously you mostly want people who believe in the cause. But this is the Centre for Effective Altruism. It’s meant to be cause-neutral, yet this proposal would have directly penalised people for disagreeing with orthodoxy.
It’s not clear from the article whether the high PELTIV score came from high value-alignment scores or something else. If anything, it sounds like there was a separate cause-specific modifier (but it’s very hard to tell). So I don’t think this is much evidence for misuse of “value-aligned”.
A few months ago I would have easily agreed with “the view that EA employers are so fragile as to deny job opportunities based on EA Forum hot takes is hopefully greatly exaggerated and very disturbing if not.”
However, I then read about the hiring practices at FTX and significantly updated on this. It’s now hard for me to believe that at least some EA employers would not deny job opportunities based on EA Forum hot takes!
Where is there info on hiring practices at FTX? I don’t remember seeing this and would be interested.
More generally, I would be really interested in hearing about particular examples of people being denied job opportunities in EA roles because of opinions they share on the EA Forum (this would worry me a lot).
The only rumor I’ve heard is that someone was once denied an opportunity because they were deemed not a longtermist, and the only way the org could have known this was from the person’s public writing; I also wasn’t personally sold that strongly holding longtermist values was a key requirement for the position. That being said, I’ve only heard this from the person who didn’t get hired, and it’s possible that I’m substantially misunderstanding the situation.
I definitely would like to hear other people’s views on this, from burner accounts if need be.
Huh, I feel mixed about this. I want there to be ways and places to just talk, without having all-things-considered opinions, and not be too strongly judged for it (and I know some people hold to a “what’s the best thing this person has done/said” standard rather than “what’s the quality of the average thing they said”), both for epistemics and because it’s probably sensible in a bunch of ways. But it would also be confusing to me if people’s behavior on the public internet didn’t give evidence about the kind of employee they are, or about their views, in ways that might matter. Maybe we’re just litigating how big the grace buffer should be (where maybe we agree it should be pretty big).
Good god, I certainly hope any practices (EDIT: meaning specifically the obvious trash fire practices) at FTX are not common at other EA orgs. To be clear, I only really know how Rethink Priorities operates and I have minimal insight into the operations of other groups.
FTX seems to have been a trash fire in many different respects at once, but the above sentence seems super hyperbolic (you hope zero practices at FTX are common at EA orgs??), and I don’t know what the non-hyperbolic version of it in your mind is.
I’m somewhat wary of revisionist history to make it sound like FTX was more wildly disjoint from EA culture or social networks than it in fact was, at least in the absence of concrete details about what it was actually like to work there.
Yes, my statement was intentionally hyperbolic. I definitely did not mean to say that there are absolutely zero practices at FTX that I like, nor did I mean to suggest that FTX is disjoint from EA culture (though I know so little about what FTX was like or what EA culture is like outside of RP that it is hard for me to say).
The base rate I have in mind is that FTX had access to a gusher of easy money and was run by young, energetic people with minimal oversight and limited use of formalized hiring systems. That produced a situation where top management’s opinion was the critical factor in who got promoted or hired into influential positions. The more other EA organizations resemble FTX, the more strongly I would suspect the same of them.
I suspect “easy money” is an important risk factor for “top management’s opinion [is] the critical factor in who got promoted or hired into influential positions”, but it certainly doesn’t have to be the case!
Do you mean FTX the exchange or FTX the Future Fund?
This is not just a question of the attitude of EA employers but of wider society. I have been involved in EA for a long time, but I now work in a professional role where reputation is a concern, so I do all my online activity pseudonymously.
I would dislike it if it became the norm that people could only be taken seriously if they posted under their real names, and discussion was reserved for “professional EAs”. And that would probably be bad for the variety of perspectives and expertise in EA discussions.