I disagree with this for two reasons. First, it’s odd to me to categorize political advertising as “direct impact” but short-term spending on poverty or disease as “reputational.” There is overlap in both cases, but if we must categorize, I think it’s closer to the opposite. Short-term, RCT-backed spending is the most direct impact EA knows how to confidently make. And isn’t the entire project of engaging with electoral politics one of managing reputations?
To fund a political campaign is to attempt to popularize a candidate and their ideas; that is, to improve their reputation. That only works if you’re deeply in tune with which of our ideas are political winners and which are less so. It only works if you’re sensitive to what the media will say. If selectively highlighting our most popular causes seems disingenuous, manipulative, or corrosive to an impression of integrity, I hear you—but that’s hardly a case FOR political advertising. Supporting what SBF is doing in the first place requires accepting that, at least to some extent, framing EA in a way the mainstream can get behind instrumentally overlaps with “doing things because we think they’re right.”
If you accept that reputation matters, why is optimizing for an impression of greater integrity better than optimizing for an impression of greater altruism? In both cases, we’re just trying to anticipate and strategically preempt a misconception people may have about our true motivations. It just boils down to which misconception you think is empirically more common or dangerous.
My second and broader worry is that EA may be entering the most dangerous reputational period of its existence to date. I’m planning a standalone post on this soon, so I won’t elaborate too much on why I think this here. But the surge of recent posts you mention suggests I’m not alone; and if we’re right, high-level PR mindfulness could be more important now than ever before. EA’s reputation is important for long-term impact, especially if you think (as SBF appears to) that some of the most important X-risk reductions will have to come from within democratic governments.
First, it’s odd to me to categorize political advertising as “direct impact” but short-term spending on poverty or disease as “reputational.”
The OP focused on PR/reputation, which is what I reacted to.
If you accept that reputation matters, why is optimizing for an impression of greater integrity better than optimizing for an impression of greater altruism? In both cases, we’re just trying to anticipate and strategically preempt a misconception people may have about our true motivations.
I think there’s a difference between creating a reputation for integrity by actually behaving with integrity, and creating a reputation of being helpful by doing something that you actually don’t think is the most helpful thing that you can do.
If you are a consequentialist, then incorporating the consequences of reputation into your cost-benefit assessment is “actually behaving with integrity.” Why is it more honest—or even perceived as more honest—for SBF to exempt reputational consequences from what he thinks is most helpful?
Insofar as SBF’s reputation and EA’s reputation are linked, I agree with you (and disagree with OP) that it could be seen as cynical and hypocritical for SBF to suddenly focus on American beneficiaries in particular. These have never otherwise been EA priorities, so he would be transparently buying popularity. But I don’t think funding GiveWell’s short-term causes—nor even funding them more than you otherwise would for reputational reasons—is equally hypocritical in a way that suggests a lack of integrity. These are still among the most helpful things our community has identified. They are heavily funded by Open Philanthropy and by a huge portion of self-identified EAs, even apart from their reputational benefits. Many, both inside and outside the movement, see malaria bednets as the quintessential EA intervention. Nobody outside the movement would see that as a betrayal of EA principles.
Insofar as EA and SBF’s reputations are severable, perhaps it doesn’t matter what’s quintessentially EA, because “EA principles” are broader than SBF’s personal priorities. But in that case, because SBF’s personal priorities incline him towards political activism on longtermism, they should also incline him towards reputation management. Caring about things with instrumental value to protecting the future should not be seen as a dishonest deviation from longtermist beliefs, because it isn’t!
In another context, doing broadly popular and helpful things you “actually don’t think are the most helpful” might just be called hedging against moral uncertainty. Responsiveness to social pressure on altruists’ moral priorities is a humble admission that our niche and esoteric movement may have blind spots. It’s also, again, what representative politics are all about. If we want to literally help govern the country, we must be inclusive. We must convey that we are not here to evangelize to the ignorant masses, but are self-aware enough to incorporate their values. So if there’s a broad bipartisan belief that the very rich have obligations to the poor, SBF may have to validate that if he wants to be seen as altruistic elsewhere.
(I’m in a rush, so apologies if the above rambles).
Thank you, this is helpful. I do agree with you that there is a difference between supporting GiveWell-recommended charities and supporting American beneficiaries. More generally, my argument wasn’t directly about what donations Sam Bankman-Fried or other effective altruists should make, but rather about what arguments are brought to bear on that issue. Insofar as an analysis of direct impact suggests that certain charities should be funded, I obviously have no objection to that. My comment rather concerned the fact that the OP, in my view, put too much emphasis on reputational considerations relative to direct impact. (And I think this has been a broader pattern on the forum lately, which is part of the reason I thought it was worth pointing out.)
I didn’t focus on it in this post, but I genuinely think that the most helpful thing to do involves showing proficiency in achieving near-term goals, as that both allows us to troubleshoot potential practical issues and allows outsiders to evaluate our track record. Part of showing integrity is showing transparency (assuming that we want outside support), and working on neartermist causes allows us to more easily do that.