There was a vague tone of “the goal is to get accepted to EAG” instead of “the goal is to make the world better,” which I felt a bit uneasy about when reading the post. EAGs are only useful insofar as they let community members do better work in the real world.
Hm, I understand why you say that, and you might be right (e.g., I see some signs in the OP that are compatible with this interpretation). Still, I want to point out that there’s a risk of being a bit uncharitable. It seems worth saying that anyone who cares deeply about having a lot of impact should naturally try hard to get accepted to EAG (assuming they see concrete ways to benefit from it). Therefore, the fact that someone seems to be trying hard can also be evidence that EA is very important to them. Especially when you’re working on a cause area that is under-represented among EAG-attending EAs, like animal welfare, it might matter more (based on your personal moral and empirical views) to get invited.[1]
Compare the following two scenarios. If you’re the 130th applicant focused on trying out AI safety research and the conference committee decides that the AI safety conversations at the conference will be more productive without you in expectation because they think other candidates are better suited, you might react to this news in a saint-like way. You might think: “Okay, at least this means others get to reduce AI risk effectively, which fits my understanding of doing the most good.” By contrast, imagine you get rejected as an advocate for animal welfare. In that situation, you might legitimately worry that your cause area – which you naturally could think is especially important, at least according to your moral and empirical views – ends up neglected. Accordingly, the saint-like reaction of “at least the conference will be impactful without me” doesn’t feel as appropriate (it might be more impactful based on other people’s moral and empirical views, but not necessarily yours). (That doesn’t mean that people from under-represented cause areas should be included just for the sake of better representation, nor that everyone with an empirical view that differs from what’s common in EA is entitled to have their perspective validated. I’m just pointing out that we can’t fault people from under-represented cause areas for thinking that it’s altruistically important for them to get invited – that’s what’s rational when you worry that the conference wouldn’t represent your cause area all that well otherwise. [Even so, I also think it’s important for everyone to be understanding of others’ perspectives on this. E.g., if lots of people don’t share your views, you simply can’t be too entitled about getting representation, because a norm that gave all rare views a lot of representation would lead to a chaotic, scattered, and low-quality conference.
Besides, if your views or cause area are too uncommon, you may not benefit from the conference as much, anyway.])
I strongly agree with this. And your footnote example is also excellent. I don’t see why it isn’t obvious that Constance’s goal of getting into EAG is merely instrumental to her larger goal of making the world a better place (primarily by reducing animal suffering, since that is what she currently seems to believe is the world’s most pressing issue).