This is a response to D0TheMath, quinn, and Larks, who all raise some version of this epistemic concern:
(1) Showing how EA is compatible with leftist principles requires being disingenuous about EA ideas —> (2) recruit people who join solely based on framing/language —> (3) people join the community who don’t really understand what EA is about —> (4) confusion!
The reason I am not concerned about this line of argumentation is that I don’t think it attends to the ways people decide whether to become more involved in EA.
(2) In my experience, people are most likely to drop out of the fellowship during the first few weeks, while they’re figuring out their schedules for the term and weighing whether to make the program one of their commitments. During this period, I think newcomers are easily turned off by the emphasis on quantification and triage. The goal is to find common ground on ideas with less inferential distance so fellows persevere through this period of discomfort and uncertainty. This earns you some weirdness points that you can spend in the weeks to come, e.g. when introducing x-risks. So people don’t join solely based on framing/language; rather, these are techniques to extend a minimal degree of familiarity to smart and reasonable people who would otherwise fail to give the fellowship a chance.
(3) I think it’s very difficult to maintain inaccurate beliefs about EA for long. These will be dispelled as the fellowship continues and students read more EA writing, as they continue on to an in-depth fellowship, as they begin their own exploration of the forum, and as they talk to other students who are deeper in the EA fold. Note that all of these generally occur prior to attending EAG or applying for an EA internship/job, so I think anyone who joined on a mistaken impression is likely to either update or drop out before triggering the harms of confusion in the broader community.
(I’m also not conceding (1), but it’s not worth getting into here.)