One tension in meta spending is that it must always be squared with the principle of impartiality. Why is a valuable benefit being offered only to EAs, rather than to those in the general population with the greatest ability to benefit?
The most common justification is that investing in EAs ultimately yields more object-level impact, making the benefit to the individual EA only incidental. For this rationale, I might add a couple of italicized modifications to one of your statements to get: As a general matter, EAs’ individual wellbeing is only a proper subject of *special community spending*[1] inasmuch as it contributes to their impact *and productivity*. That’s uncomfortable to say, and I think care must be taken to avoid sending the message that one’s wellbeing has only instrumental value.
At the same time, I think it’s critical to clearly link the justification for individual-benefitting meta activities to impartial ends.[2] Too many charitable endeavors have slowly drifted from their original focus and ended up devoting much of their energy to providing benefits for insiders. In my view, it’s important to steer well clear of that path. EA has chosen to embrace meta pathways to impact heavily, which heightens the danger of insider capture, and so warrants particular care in clearly identifying how programs that individually benefit insiders nevertheless serve impartial ends.
[1] By “special community spending,” I mean to exclude things like employee health insurance, which, at least in the US, is a form of compensation for services rendered.
[2] There could be other impartiality-approved rationales for a program that benefits individuals, such as a need to address a harm caused by certain types of EA-related actions or actors, or a harm incurred “in the line of EA duty” (broadly construed). Mental health issues stemming from worries about one’s impact could be seen as an example of the latter.