I’m honestly confused and surprised you got rejected, based on reading your linked application. I would probably have found it valuable to talk to you at a conference like this, for insights into how you do what you do, because you clearly do some of it well.
I just really hope it isn’t anti-animal-welfare bias, because I do so hope that EAs with different priorities keep intermingling.
Thanks for flagging this concern. I was worried someone might get the impression that this was related to animal welfare. While we don’t discuss the specifics of people’s applications publicly, that is definitely not the reason: we don’t penalize people for favoring animal welfare, global health, or existential risk reduction (or other prominent EA approaches).
I expect that application evaluators are unconsciously biased against animal welfare as a cause area.
How feasible is it to start collecting data on applicants’ primary cause areas and publishing acceptance ratios for people focused on different areas?
I think this would likely do more harm than good, as it could easily entrench the three initial cause areas in EA and privilege them over newer ones.
Mh, I’m relieved. ^^
I still feel like it’s in the water. But maybe the suspicion and the public speculation are what keep it there. If everyone openly speculates about whether there’s a widespread anti-animal-welfare bias in EA, it fuels distrust and schism, and can thereby create that very bias on the opposing side.
On the other hand, speaking positively about the value of big-tent EA, intermingling, and small-world networks may make people pay more attention to incipient distrust and try to heal it. We want a world where aspiring EAs can find other EAs to talk to about all potential causes, lest their prioritisation be overly influenced by their arbitrary initial positions on the social graph.