Sorry that your experience of this has been rough.
Some quick thoughts I had whilst reading:
There was a vague tone of “the goal is to get accepted to EAG” instead of “the goal is to make the world better,” which I felt a bit uneasy about when reading the post. EAGs are only useful insofar as they let community members do better work in the real world.
Because of this, I don’t feel strongly about the EAG team providing feedback to people on why they were rejected. The EAG team’s goal isn’t to advise on how applicants can fill up their “EA resume.” It’s to facilitate impactful work in the world.
I remembered a comment that I really liked from Eli: “EAG exists to make the world a better place, rather than serve the EA community or make EAs happy.”
[EDIT after 24hrs: I now think this is probably wrong, and that responses have raised valid points.] You say “[others] rely on EA grants for their projects or EA organizations for obtaining jobs and therefore may be more hesitant to directly and publicly criticize authoritative organizations like CEA.” I could be wrong, but I have a pretty strong sense that nearly everyone I know with EA funding would be willing to criticise CEA if they had a good reason to. I’d be surprised if {being EA funded} decreased willingness to criticise EA orgs. I even expect the opposite to be true.
(Disclaimer that I’ve received funding from EA orgs)
Sorry that the tone of the above is harsh—I’m unsure if it’s too harsh or whether this is the appropriate space for this comment.
I’ve erred on the side of posting because it feels relevant and important.
I disagree; I know several people who fit this description (5 off the top of my head) who would find this very hard. I think it very much depends on factors like how well networked you are, where you live, how much funding you’ve received and for how long, and whether you think you could work for an org in the future.
Here’s an anonymous form where people can criticize us, in case that helps.
When people already well-respected in the community criticise something in EA, it can often be a source of prestige and a display of their own ability to think independently. But if a relative newcomer were to suggest the very same criticisms, they would often be interpreted very differently. Other aspiring EAs might intuitively classify them as “normie” rather than “EA above the pack”.
So depending on where in the local status hierarchy you find yourself, you might have very different perceptions of how risky it is for community members in general to voice contrarian opinions.
The part about newcomers doesn’t reflect my experience FWIW, though my sample size is small. I published a major criticism while a relative newcomer (knew a handful of EAs, mostly online, was working as a teacher, certainly felt like I had no idea what I was doing). Though it wasn’t the goal of doing so, I think that criticism ended up causing me to gain status, possibly (though it’s hard to assess accurately) more status than I think I “deserved” for writing it.
[I no longer feel like a newcomer so this is a cached impression from a couple of years ago and should therefore be taken with a pinch of salt]
I disagree. If anything, EA has the problem Alexrjl hinted at: you gain too much status for criticising EA. Scott Alexander’s recent post made me update in that direction.
(Sidenote: I gave your comment an upvote because I appreciate it, but an agreement downvote since I disagree. And it is just making me happy right now to see how useful explicitly separating these two voting systems can be)
FWIW, I don’t feel like a newcomer and I write a lot of contrarian (but honest) comments. I don’t generally feel like being massively downvoted gains me status. I’m often afraid I’m lowering my chances of ever getting hired by an EA org.
Hm, I understand why you say the post has that tone, and you might be right (e.g., I see some signs in the OP that are compatible with this interpretation). Still, I want to point out that there’s a risk of being a bit uncharitable. It seems worth saying that anyone who cares a lot about having an impact should naturally try hard to get accepted to EAG (assuming that they see concrete ways to benefit from it). Therefore, the fact that someone seems to be trying hard can also be evidence that EA is very important to them. Especially when you’re working on a cause area that is under-represented among EAG-attending EAs, like animal welfare, it might matter more (based on your personal moral and empirical views) to get invited.[1]
Compare the following two scenarios. If you’re the 130th applicant focused on trying out AI safety research and the conference committee decides that the AI safety conversations at the conference will be more productive without you in expectation, because they think other candidates are better suited, you might react to this news in a saint-like way. You might think: “Okay, at least this means others get to reduce AI risk effectively, which fits my understanding of doing the most good.” By contrast, imagine you get rejected as an advocate for animal welfare. In that situation, you might legitimately worry that your cause area – which you could naturally think is especially important, at least according to your moral and empirical views – ends up neglected. Accordingly, the saint-like reaction of “at least the conference will be impactful without me” doesn’t feel as appropriate (it might be more impactful based on other people’s moral and empirical views, but not necessarily yours).
(That doesn’t mean that people from under-represented cause areas should be included just for the sake of better representation, nor that everyone with an empirical view that differs from what’s common in EA is entitled to have their perspective validated. I’m just pointing out that we can’t fault people from under-represented cause areas for thinking that it’s altruistically important for them to get invited – that’s what’s rational when you worry that the conference wouldn’t represent your cause area all that well otherwise. [Even so, I also think it’s important for everyone to be understanding of others’ perspectives on this. E.g., if lots of people don’t share your views, you simply can’t be too entitled about getting representation, because a norm that gave all rare views a lot of representation would lead to a chaotic, scattered, and low-quality conference. Besides, if your views or cause area are too uncommon, you may not benefit from the conference as much, anyway.])
I strongly agree with this. And your footnote example is also excellent. I don’t see why it isn’t obvious that Constance’s goal of getting into EAG is merely instrumental to her larger goal of making the world a better place (primarily by reducing animal suffering, since that is what she currently seems to believe is the world’s most pressing issue).
I have received EA funding in multiple capacities, and feel quite constrained in my ability to criticise CEA publicly.
I’m sorry to hear that. Here’s an anonymous form, in case that helps.
I’m aware of the form, and trying to think honestly about why I haven’t used it/don’t feel very motivated to. I think there are a few reasons:
Simple akrasia. There’s quite a long list of stuff I could say, some quite subjective, some quite dated, some quite personal and therefore awkward to raise, since it feels uncomfortable criticising individuals. The logistics of figuring out which things are worth mentioning and which aren’t are quite a headache.
Direct self-interest. In practice the EA world is small enough that many things I could say couldn’t be submitted anonymously without key details removed. While I do believe that CEA are generally interested in feedback, it’s hard to believe, even with the best will in the world, that if I identify individuals in particularly strong ways and they’re still at the org, it wouldn’t lower my expectation of good future interactions with them.
Indirect self-interest/social interest. I like everyone I’ve interacted with from CEA. Some of them I’d consider friends. I don’t want to sour any of those relationships.
Fellow-interest. Some of the issues I could identify relate to group interactions, some of which don’t actually involve me but which I’m reasonably confident haven’t been submitted, presumably for similar reasons. I’m especially keen not to accidentally put anyone else in the firing line.
In general I think it’s much more effective to discuss issues publicly than anonymously (as this post does), but that magnifies all the above concerns.
Lack of confidence that submitting feedback will lead to positive change. I could get over some of the above concerns if I were confident that submitting critical feedback would do some real good, but it’s hard to have that confidence—both because CEA employees are human, and therefore have status quo bias/a general instinct to rationalise bad actions, and because, as I mentioned, some of the issues are subjective or dated, and therefore might turn out not to be relevant any more, not to be reasonable on my end, or not to be resolvable for some other reason.
I realise this isn’t helpful on an object level, but perhaps it’s useful meta-feedback. The last point gives me an idea: large EA orgs could seek out feedback actively, e.g. by posting discussion threads on their best guess about ‘things people in the community might feel bad about re us’ with minimal commentary, at least in the OPs, and seeing if anyone takes the bait. Many of the above concerns would disappear or at least be alleviated if it felt like I was just agreeing with a statement rather than submitting multiple whinges.
(ETA: I didn’t give you the agreement downvote, fwiw)
Thanks for sharing your reasons here! I definitely don’t think that the form fully fixes this problem, and it’s helpful to hear how it’s falling short. Some reactions to your points:
Yeah, this makes sense.
Totally makes sense. I haven’t reflected deeply about whether I should offer to keep information shared in the form with other staff (currently I’m not offering this). On the one hand, this might help me to get more data. On the other hand, it seems good to be able to communicate transparently within the team, and I might be left wanting to act on information but unable to do so due to a confidentiality agreement. Maybe I should think about this more.
Again, totally makes sense.
Ditto.
I’m not so sure that it is better to discuss issues publicly—I think that it can make the discussion feel more high stakes in ways that make it harder to resolve. If you’re skeptical that we’ll act without public pressure, that does seem like a reason to go public (though I think maybe you should be less skeptical, see below).
I can see why you’d have this worry, and I think that outside-view we’re probably under-reacting to criticism a bit. FWIW, I did a quick, very rough categorization of the 18 responses I’ve got to the form so far.
a) I think that 2 were gibberish/spam (didn’t seem to refer to CEA or EA at all).
b) One was about an issue that had already been resolved by the time it was submitted.
c) One was generic positive feedback.
d) Four were several-paragraph-long comments sharing some takes on EA culture/extra projects we might take on. I think that these fed into my model of what’s important in various ways, and I have taken some actions as a result, but I don’t think I can confidently say “we acted on this” or “it’s resolved”.
e) Eight were reasonably specific bits of feedback (e.g. on particular bits of text on our websites, or saying that we were focusing too much on a program for reasons). Of these:
I think that we’ve straightforwardly resolved 6 of these (like they suggested we change some text, and the text is now different in the way that they suggested).
One is a bigger issue (mental health support for EAs): we’ve been working on this but I wouldn’t say it’s resolved.
One was based on a premise that I disagree with (and which they didn’t really argue for), so I didn’t make any change.
f) Two were a bit of a mix between d) and e), and said in part that they didn’t trust CEA to do certain things/about certain topics. My take is that we are doing the things that these people don’t trust us to do, but they probably still disagree. I don’t expect that I’ve resolved the trust issue that these people raise.
Meta:
Obviously I might be biased in my assessment of the above, and you might not trust me.
My summary is that we’re probably pretty likely to act on specific feedback, but it’s harder to say whether we’ll respond effectively to more general feedback.
This all makes me think that maybe I should publicly respond to submissions (but also that could be a lot of work).
Thanks for the idea about writing comments that help people share their thoughts without getting into details.
Wow, thanks so much for sharing this publicly!
I also wanna give general encouragement for sharing a difficult rejection story.
Hi Evie,
I appreciate that you decided to post this.
Tone—I did worry that the tone might read like that. To me, getting into EAG was only instrumental to my greater goal of making the world a better place. I do have a tendency to focus a lot of energy on perceived barriers to efficacy, so it might have come off like getting into EAG was my final objective. Please feel free to point out parts of the post that seem to suggest otherwise and I can update them.
Making the world a better place—This is a really difficult thing to measure and there is not a lot of transparency around how they are measuring it. Part of why I made this post was to provide more data points to answer Eli’s other question of, “how costly is rejection?” That needs to be factored into calculating how much good EAG is producing. I just don’t think it is properly accounted for.
Hesitant to criticize—I would agree with Vaidehi and say that there are many factors to consider in how comfortable individuals are criticizing EA organizations. Just to add my own data point: there were a couple of people who reviewed this post who were hesitant to be identified in one way or another out of concern for a negative consequence in the future. In the 1-2 weeks since I found out about my rejection, I have probably talked to 15-20 EAs, and about 80% have expressed wariness about saying/doing something that would upset a large EA organization.
It seems to me completely valid to acknowledge that there is a real cost to rejection that is felt by individuals at a very personal level. Part of this cost will be the utilitarian frustration of being thwarted from taking advantage of what one imagines could be a highly effective means of furthering one’s goals (i.e. doing good), but part of this cost for many people will be the very personal hurt of rejection, and both can be felt at the same time. We are social beings with identities and values that are rooted in and affirmed by community. This is the reason the philosophy of effective altruism has gained traction in calling itself a community and building institutions around that. Community is important to who we are, what we believe, and what we do. If doing good is central to one’s sense of purpose and identity, and one has found in the EA community an identity and moral framework that provides a means through which one can live out one’s values, then a rejection handed down by the highly respected leaders of this community will be incredibly painful on a personal level. Our goals and values are inextricably tied up with our identities and relationships.
The psychological cost of rejection is real and, I think, potentially detrimental to the greater purpose of the organization insofar as it discourages and demotivates people who are driven by a common altruistic purpose, and contributes to a wider sense of the EA community as gated and exclusionary. To the extent that one cares about these costs, I do not see the gain in refusing to acknowledge that the psychological toll of rejection is real, is valid, and is intrinsically bound up with any more purely instrumental costs.
I roughly agree with your first three points.
I’m not sure about the point on funding, though. I expect people relying on EA grants are reluctant to criticize authoritative orgs like CEA, particularly publicly and non-anonymously. I’d guess that they’re more reluctant than people not on EA grants, relative to the amount of useful criticism they could provide.
I think the first point is subtly wrong in an important way.
EAGs are not only useful insofar as they let community members do better work in the real world. EAGs are useful insofar as they result in a better world coming to be.
One way in which EAGs might make the world better is by fostering a sense of community, validation, and inclusion among those who have committed themselves to EA, thus motivating people to so commit themselves and to maintain such commitments. This function doesn’t bear on “letting” people do better work per se.
Insofar as this goal is an important component of EAG’s impact, it should be prioritized alongside more direct effects of the conference. EAG obviously exists to make the world a better place, but serving the EA community and making EAs happy is an important way in which EAG accomplishes this goal.
EDIT: Lukas Gloor does a much better job than I do at getting across everything I wanted to in this comment here.
From my reading, her goals are not simply to get into EAG. It seems obvious to me that her goal of getting into EAG is instrumental to the end of making the world a better place. The crux is not “Constance just wants to get into EAG.” The crux, I think, is that Constance believes she can help make the world a better place much more by connecting with people at EAG. The CEA does not appear to believe this to be the case.
The crux should be the focus. Focusing on how badly she wants to get into EAG is a distraction.
Regarding “EAG exists to make the world a better place, rather than serve the EA community or make EAs happy”: for many EAs you cannot have a well-run conference that makes the world a better place without it also being a place that makes many EAs very happy. I’d think the two goals are synonymous for a great many EAs.
In their comment Eli says:
“This unfortunately sometimes means EAs will be sad due to decisions we’ve made — though if this results in the world being a worse place overall, then we’ve clearly made a mistake.”
Let’s also remember that EAs who get rejected from EAG and who believe their rejection resulted in the world being a worse place overall will also be sad—probably more so, because they get both the FOMO and a deeper moral sting. In fact, they might be so sad it motivates them to write an EA Forum post about it in the hopes of making sure that the CEA didn’t make a mistake.
I like Eli’s comment. It captures something important. But I also don’t like it, because it can provide a false sense of clarity—separating goals that aren’t actually always that separate—and this false clarity can provide a basis for motivated reasoning, making it easier to believe the EAG admissions process didn’t make a mistake and make the world a worse place. Why? Because it makes it easier to dismiss an EA who is very sad about being rejected from EAG as just someone who “wants to get into EAG.”
I’m wary of the claim that “EAG exists to make the world a better place, rather than serve the EA community or make EAs happy.” Obviously in some top-level sense it’s true, but it seems reminiscent of the paradox of hedonism, in that I can easily believe that if you consciously optimise events for abstract good-maximisation, you end up maximising good less than if you optimise them for the health of a community of do-gooders.
(I’m not saying this is a case for or against admitting the OP—it’s just my reaction to your reaction)