Thanks, Caroline, for writing this! I think it’s a really rich vein to mine because it pulls together several threads I’ve been thinking a lot about lately.
One issue it raises is whether we should care about the “altruist” in effective altruists. If someone is doing really useful things because they think FTX will pay them a lot of money or fund their political ambitions, is this good because useful things happen, or bad because they won’t be a trustworthy agent for EA when put into positions of power? My instinct is to prefer giving people good incentives over selecting for people who are virtuous: I think virtue tends to be very situationally dependent, and that very admirable people can do bad things and self-deceive if it’s in their interest to do so. But it’s obviously not either-or. I also tend to have fairly bourgeois personal preferences and think EA should aspire to universality, such that lots of adherents can be materially prosperous and conventionally successful and either donate ~20% of their income or work/volunteer for a useful cause (a sort of prosperity-gospel form of EA amenable to wide swathes of the professional and working class, rather than a self-sacrifice form that could be purer).
A separate issue is one of community health. On an individual level it might be fine if people join EA because the retreats are lit and the potential for power and status is high, but as a group there may be some tipping point where people’s self-identity changes as the community in fact prizes the perks and status over results. This could especially be a concern insofar as 1. goals that are far off make it easy to self-deceive about progress and 2. building the EA community can be seen as an end in itself in a way that risks circularity and self-congratulation. You can say the solution here is to really elevate people who do in fact achieve good results (because achieving good things for the world is what we care about), but lots of results take a long time to unfold (even for “near-termist” causes) and are uncertain (e.g. Open Phil’s monetary policy and criminal justice reform work, both of which I admire and think have been positive). For example, while I’ve been in the Bahamas, people have been very complimentary of 1Day Sooner (where I work and which I think EAs tend to see as a success story). I’m proud of my work at 1Day and hopeful that what we’ve already done is expanding the use of challenge studies to develop better vaccines, but despite achieving some intermediate procedural successes (positive press coverage, some government buy-in and policy choices, some academic and bioethics work), I think the jury is very much still out on what our impact will end up being, and most of our impact will likely come from future work.
The point about self-identity and developing one’s moral personhood really drives me in the direction of wanting to encourage people to make altruistic choices that are significant and legible to themselves and others. For example, becoming a kidney donor made me identify more with the desire to have an impact, which led me further into doing EA types of work. I think the norm of donating a significant portion of your income to charity is an important one for this reason, and I’ve been disappointed to see that norm weaken in recent years. I do worry that some of the types of self-sacrificing behavior you mention aren’t legible enough or state-change-y enough to have this permanent character/self-identity-building effect.
There’s an obvious point here about PR, and I do think committing to behavior that we’re proud to display in public is an important principle (though not one that I think necessarily cuts against paying EAs a lot). First, public display is epistemically valuable because (a) it unearths criticisms and ideas an insular community won’t necessarily generate and (b) views that have overlapping consensus among diverse audiences are more likely to be true. Second, hiding things isn’t a sustainable strategy and also looks bad on its own terms.
A last, imperfectly related thought: I do think there may be a bit of a flaw in EA considering meta-level community building on the same plane as object-level work, and this might be driving a bit of inflation in meta-level activities that manifests itself in opulent EA college resources (and maybe some other things) that are intuitively jarring even as they can seem intellectually justified. If you consider object-level and meta-level stuff on the same plane, the $1 invested in recruiting EAs who then eventually spend $10 and recruit more EAs seems like an amazing investment (way better than spending that $1 on an actual EA object-level activity). But this intuitively seems to me like it’s missing something, and discounting the object-level $ by the $ spent on the meta level needed for fundraising doesn’t seem to solve the problem. I’m not sure, but I think the issue (and this also applies to other power-seeking behavior like political fundraising) is that the community building is self-serving (not “altruistic”) and from a view outside of EA does not seem morally praiseworthy. We could take the position that that outside view is simply wrong insofar as it doesn’t take into account the possibility that we are in fact right about our movement being right. The Ponzi-ishness of the whole thing doesn’t quite sit well with me, but I haven’t come to a well-reasoned view.