Forum? I’m against ’em!
utilistrutil
Thanks for this post! I’m wondering what social change efforts you find most promising?
Oh I see! Ya, crazy stuff. I liked the attention it paid to the role of foundation funding. I’ve seen this critique of foundations included in some intro fellowships, so I wonder if it would also especially resonate with leftists who are fed up with cancel culture in light of the Intercept piece.
I don’t think anything here attempts a representation of “the situation in leftist orgs”? But yes lol same
This is a response to D0TheMath, quinn, and Larks, who all raise some version of this epistemic concern:
(1) Showing how EA is compatible with leftist principles requires being disingenuous about EA ideas → (2) we recruit people who join solely based on framing/language → (3) people join the community who don’t really understand what EA is about → (4) confusion!
The reason I am not concerned about this line of argumentation is that I don’t think it attends to the ways people decide whether to become more involved in EA.
(2) In my experience, people are most likely to drop out of the fellowship during the first few weeks, while they’re figuring out their schedules for the term and weighing whether to make the program one of their commitments. During this period, I think newcomers are easily turned off by the emphasis on quantification and triage. The goal is to find common ground on ideas with less inferential distance so fellows persevere through this period of discomfort and uncertainty, and to earn yourself some weirdness points that you can spend in the weeks to come, eg when introducing x-risks. So people don’t join solely based on framing/language; rather, these are techniques to extend a minimal degree of familiarity to smart and reasonable people who would otherwise fail to give the fellowship a chance.
(3) I think it’s very difficult to maintain inaccurate beliefs about EA for long. These will be dispelled as the fellowship continues and students read more EA writing, as they continue on to an in-depth fellowship, as they begin their own exploration of the forum, and as they talk to other students who are deeper in the EA fold. Note that all of these generally occur prior to attending EAG or applying for an EA internship/job, so I think these misconceptions are likely to be dispelled before they can trigger the harms of confusion in the broader community.
(I’m also not conceding (1), but it’s not worth getting into here.)
Ya maybe if your fellows span a broad political spectrum, then you risk alienating some and you have to prioritize. But the way these conversations actually go in my experience is that one fellow raises an objection, eg “I don’t trust charities to have the best interests of the people they serve at heart.” And then it falls to the facilitator to respond to this objection, eg “yes, PlayPumps illustrates this exact problem, and EA is interested in improving these standards so charities are actually accountable to the people they serve,” etc.
My sense is that the other fellows during this interaction will listen respectfully, but they will understand that the interaction is a response to one person’s idiosyncratic qualms, and that the facilitator is tailoring their response to that person’s perspective. The interaction is circumscribed by that context, and the other fellows don’t come away with the impression that EA only cares about accountability. In other words, the burden of representation is suspended somewhat in these interactions.
If we were writing an Intro to EA Guide, for example, I think we would have to be much more careful about the political bent of our language because the genre would be different.
I agree with quinn. I’m not sure what the mechanism is by which we end up with lowered epistemic standards. If an intro fellow is the kind of person who weighs reparative obligations very heavily in their moral calculus, then deworming donations may very well satisfy this obligation for them. This is not an argument that motivates me very much, but it may still be a true argument. And making true arguments doesn’t seem bad for epistemics? Especially at the point where you might be appealing to people who are already consequentialists, just consequentialists with a developed account of justice that attends to reparative obligations.
Thanks for the reply! I’m satisfied with your answer and appreciate the thought you’ve put into this area :) I do have a couple follow-ups if you have a chance to share further:
I expected to hear about the value of the connections made at EAG, but I’m not sure how to think about the counterfactual here. Surely some people choose to meet up at EAG but in the absence of the conference would have connected virtually, for example?
I also wonder about the cause areas of the EA-aligned orgs you cited. Ie, I could imagine longtermist orgs that are more talent-constrained estimating higher dollar value for a connection than, say, a global health org that is more funding-constrained. So I think EAs with different priorities might have different bliss points for conference funding levels.
It also seems like there might be tension between more veteran vs newcomer EAs? Eg, people who have been in the fold for longer might prefer simpler arrangements. In particular, I worry about pandering to “potential donors.” Who are these donors who are unaligned to the extent that their conference experience will determine the size of their future donations? Even if they do exist, this seems like a reason to have a “VIP ticket” or something.
Ultimately, the conference budget is one lens that raises the question, who is EAG for? And I wonder if that question is resolved in favor of longtermist orgs and new donors, at least right now.
The second point implies more of a bright-line dynamic than a scalar one, which seems consistent with scope insensitivity over lower donation amounts. That is, we might expect scope insensitivity to equalize the perception of $1m and $5m, but once you hit $10m, you attract negative media coverage. If we restrict ourselves to donation sizes that fly under the radar of national media outlets, then the scope insensitivity argument may still bite.
EAG SF Was Too Boujee.
I have no idea what the finances for the event looked like, but I’ll assume the best case that CEA at least broke even.
The conference seemed extravagant to me. We don’t need so much security or so many staff walking around to collect our empty cups. How much money was spent to secure an endless flow of wine? There were piles of sweaters left over at the end; attendees could have opted in with their sizes ahead of time to calibrate the order.
Particularly in light of recent concerns about greater funding, it would behoove us to consider the harms of an opulent EAG to our optics, culture, and values. And even if EAGs are self-sustaining, we should still be vigilant regarding the opportunity cost of the money spent on a conference ticket. An attendee seems more likely to fund their ticket out of their “donations bucket” than their “white wine and cheesecake bucket.”
I’m not saying we need maximal asceticism; I’m sure there are large benefits to a comfortable conference experience in a good venue. But as a critical thread in the fabric of our community, EAG presents a unique opportunity for us to practice and affirm our values. We can do better.
A couple qualifications: First, I’ve only been to a couple non-EA conferences; maybe conferences are generally quite fancy, and the EAG organizers are anchoring to a standard I’m not familiar with. Second, I have great faith in CEA, and I would not be surprised if they face non-negotiable requirements (eg with respect to personnel) imposed by the city or venue.
Hi Ann! Congratulations on this excellent piece :)
I want to bring up a portion I disagreed with and then address another section that really struck me. The former is:
Of course, co-benefits only affect the importance of an issue and don’t affect tractability or neglectedness. Therefore, they may not affect marginal cost-effectiveness.
I think I disagree with this for two reasons:
(1) Improving the magnitude of impact while holding tractability and neglectedness constant would increase impact on the margin, ie, if we revise our impact estimates upwards at every possible level of funding, then climate change efforts become more cost-effective (a rough formalization below).
(2) Considering co-benefits does seem to affect tractability, just the tractability of the co-benefit issue areas rather than of climate change per se. Eg, addressing energy poverty becomes more tractable as we discover effective interventions to address it.
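A minimal sketch of point (1), in my own notation rather than anything from Ann’s piece: write the direct climate impact achievable at funding level F as I(F), so marginal cost-effectiveness is the derivative I'(F). If co-benefits contribute an additional term C(F) that also grows with funding, then
\[
\mathrm{MCE}_{\text{with co-benefits}}(F) \;=\; \frac{d}{dF}\bigl[I(F) + C(F)\bigr] \;=\; I'(F) + C'(F) \;>\; I'(F) \;=\; \mathrm{MCE}_{\text{without}}(F),
\]
so revising impact estimates upward at every funding level does raise cost-effectiveness on the margin, even with tractability and neglectedness held fixed.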
The section that struck me was:
climate change is somewhat unique in that its harms are horrible and have time-limited solutions; the growth rate of the harms is larger, and the longer we wait to solve them the less we will be able to do.
To be fair, other x-risks are also time-limited. Eg if nuclear war is currently going to happen in n years, then by next year we will only have n − 1 years left to solve it. The same holds for a catastrophic AI event. It seems like ~the nuance~ is that in the climate change case, both the tractability and the timeframe diminish the longer we wait. Compared to the AI case, for example, where the risk itself is unclear, I think this weighing makes climate change mitigation much more attractive.
Thanks for a great read!
Why is “people decide to lock in vast nonhuman suffering” an example of failed continuation in the last diagram?
EMs?