because the view that EA employers are so fragile as to deny job opportunities based on EA Forum hot takes is hopefully greatly exaggerated and very disturbing if not.
There’s at least some evidence to suggest these fears are justified. Take the thankfully scrapped “PELTIV” proposal for tracking conference attendees:
Individuals were to be assessed along dimensions such as “integrity” or “strategic judgment” and “acting on own direction,” but also on “being value-aligned,” “IQ,” and “conscientiousness.” Real names, people I knew, were listed as test cases, and attached to them was a dollar sign (with an exchange rate of 13 PELTIV points = 1,000 “pledge equivalents” = 3 million “aligned dollars”).
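To put that quoted exchange rate in concrete terms, here is the arithmetic it implies (purely an illustration of the numbers reported above; the article doesn’t describe how PELTIV scores were actually computed or combined):

```python
# Back-of-the-envelope only: the per-point value implied by the exchange rate
# quoted in the article (13 PELTIV points = 1,000 "pledge equivalents"
# = 3 million "aligned dollars"). Nothing here reflects how PELTIV itself worked.
peltiv_points = 13
pledge_equivalents = 1_000
aligned_dollars = 3_000_000

pledges_per_point = pledge_equivalents / peltiv_points  # ~77 pledge equivalents
dollars_per_point = aligned_dollars / peltiv_points     # ~$230,769

print(f"1 PELTIV point ≈ {pledges_per_point:.0f} pledge equivalents "
      f"≈ ${dollars_per_point:,.0f} in 'aligned dollars'")
```

In other words, going by the reported conversion, each PELTIV point was being treated as worth roughly $230,000 in “aligned dollars”.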
I don’t think it’s unreasonable to be worried that if people are being tracked for their opinions at conferences, their forum presence might also be. I’ll repeat that this proposal was scrapped, but I get why people would be paranoid.
There’s also the allegation in the “Doing EA Better” post that:
Hiring and funding practices often select for highly value-aligned yet inexperienced individuals over outgroup experts.
If this is true, then criticising EA orthodoxy might make you less “value-aligned” in the eyes of EA decision makers, and cost you real money.
Maybe people have started to use “value-aligned” to mean “agrees with everything we say”, but the way I understand it, it means “_cares_ about the same things as us”. Being value-aligned does not mean agreeing with you about your strategy, or much else. In fact, someone posting a critical screed about your organization on the EA Forum is probably decent evidence that they are value-aligned: they cared enough to turn up in their spare time and talk about how you could do things better (implicitly: to achieve the goal you both share).
There are definitely some criticisms that suggest that you might not be value-aligned, but for most of the ones I can think of it seems kind of legitimate to take them into account. e.g. “Given that you wrote the post ‘Why animal suffering is totally irrelevant’, why did you apply to work at ACE?”
So, there are many things that could be said about PELTIV, but I’m not convinced that filtering for value-alignment filters negatively for criticality; if anything, I think it’s the opposite.
There are definitely some criticisms that suggest that you might not be value-aligned, but for most of the ones I can think of it seems kind of legitimate to take them into account. e.g. “Given that you wrote the post ‘Why animal suffering is totally irrelevant’, why did you apply to work at ACE?”
Yeah, in contrast, I would generally expect a post called “Statistical errors and a lack of biological expertise mean ACE have massively over-estimated chicken suffering relative to fish” to be a positive signal, even though it is clearly very critical.
I agree with you that filtering for alignment is important. The mainstream non-profit space speaks a lot about filtering for “mission fit” and I think that’s a similar concept. Obviously it would be hard to run an animal advocacy org with someone chowing down on chicken sandwiches every day for lunch in the organization cafeteria.
But here’s my hot take on the main place I see this go wrong in EA: some EAs I have talked to, including some quite senior ones, overuse the chain of reasoning “this person may be longtermist-adjacent and seem to be well-meaning but they just don’t give me enough vibes that they’re x-risk motivated and no I did not actually ask them about it or press them about this” → “this is not a person I will work with”, to the point of excluding people with nuanced views on longtermism (or just confused views, who could learn and improve). This makes the longtermist community more insular and worse. I think PELTIV and the like have a similar flavour of making snap judgements from afar without actually checking them against reality (though there are other clear problems too).
My other take about where this goes wrong is less hot and basically amounts to “EA still ignores outside expertise too much because the experts don’t give off enough EA vibes”. If I recall correctly, nearly all opinions on wild animal welfare in EA had to be thrown out after discussion with relevant experts.
“this person may be longtermist-adjacent and seem to be well-meaning but they just don’t give me enough vibes that they’re x-risk motivated and no I did not actually ask them about it or press them about this”
Fortunately this can be fixed by publishing pamphlets with the correct sequences of words helpfully provided, and creating public knowledge that if you’re serious about longtermism you just need to whisper the correct sequence of words to the right person at the right time.
Jokes aside, there’s an actual threat of devolving into applause-light factories (I’ll omit the rant about how the entire community-building enterprise is on thin ice). Indeed, someone at Rethink Priorities once told me they weren’t convinced that the hiring process was doing a good job of separating “knows what they’re talking about, can reason about the problems we’re working on, cares about what we care about” from “ideological passwords, recitation of shibboleths”; it was one of the things they really wanted to get right, and they weren’t confident they were getting right. It’s not exactly easy.
Yeah I certainly don’t think our hiring process is perfect at this either. These kinds of concerns weigh on me a lot and we’re constantly thinking about how we can get better.
The article specifically claimed that “Low PELTIV value was assigned to applicants who worked to reduce global poverty or mitigate climate change, while the highest value was assigned to those who directly worked for EA organizations or on artificial intelligence.” That suggests that a post advocating for a reallocation of effort to the former might be relevant.
I agree that if “value-aligned” is being used in the sense you are talking about, then it’s fine.
The allegations are that it’s not being used in that sense: that it’s being used to punish people in general for having unorthodox beliefs.
The article I linked states that:
Low PELTIV value was assigned to applicants who worked to reduce global poverty or mitigate climate change, while the highest value was assigned to those who directly worked for EA organizations or on artificial intelligence.
This would be completely fine if you were in an AI risk organisation: obviously you mostly want people who believe in the cause. But this is the Centre for Effective Altruism. It’s meant to be neutral, but this proposal would have directly penalised people for disagreeing with orthodoxy.
It’s not clear from the article whether the high PELTIV score came from high value-alignment scores or something else. If anything, it sounds like there was a separate cause-specific modifier (but it’s very hard to tell). So I don’t think this is much evidence for misuse of “value-aligned”.
I haven’t seen that, but if that’s happening then I agree that’s bad and we should discourage it!