WRT humility, I think it’s important to distinguish between public attitudes, in-group attitudes, and private attitudes. Specifically, when it comes to public humility: while people in general probably tend to express more overconfidence than would be optimal from a greater-good perspective (signaling 101, the lemons problem, etc.), I’m not sure that EAs do, especially to an extent that hurts EA goals. There are costs to public humility which don’t appear in other realms; specifically, people give less credence to what you have to say. I have seen many conversations where “I’m not an expert in this, but...” is met with extreme hostility and dismissal, whereas overconfident yet faulty claims would have been better received.
It’s reasonable to suppose that ordinary norms have evolved to optimize the level of public humility which most strongly bolsters the speaker’s reputation in a competitive marketplace of ideas. So if we are more publicly humble than this, then we should expect a weaker reputation. This can be acceptable if you have other goals besides supporting your own reputation and ideas, but it should be kept in mind. It’s one reason to have different attitudes and goals for outward-facing discussions than for inward-facing discussions.
For example, Holden Karnofsky and Elie Hassenfeld, the two founding staff of GiveWell, promoted GiveWell on several blogs and in email forums, either by disguising their identities or by not revealing their associations with GiveWell. This practice is known as astroturfing.
That’s not just a violation of extraordinary honesty. That’s straight-up deception. It doesn’t have much to do with the claim that we should be extraordinarily honest. Ordinary common sense morality already says that astroturfing is bad.
This means that communities become either predominantly honest or predominantly dishonest, since individuals joining the community adapt to the level of honesty.
Are you suggesting that we should observe a bimodal distribution of honesty within communities? I’m not sure if that matches my observations.
Perceptions of dishonesty may encourage people to be dishonest but that is subtly different from dishonesty.
Honesty can also be very helpful when an agent turns out to have been wrong. If I promote a cause I’m very confident in, and lie, I may persuade more people to join my cause. But later, if it turns out that I am wrong in my private reasoning, then the magnitude of my mistake is multiplied by the number of people I deceived. If, on the other hand, I am open with my honest reasoning, other people are at liberty to discover my mistake and prevent a bad outcome.
Again, lying is a violation of ordinary honesty, not extraordinary honesty. I’m still at a loss to see what demands us to be extraordinarily honest. It might help to be clearer about what you mean by honesty above and beyond what ordinary morality entails.
But often this criticism helps you to work out how to improve, and this benefit outweighs costs to your reputation.
I don’t know if this has been the case.
I think that historically most criticism leveled at EA has not really helped with any of our decisions. Take, for example, the complaint that Open Phil had too many associations with its grant recipients. Does that help them make better grants? No; it just took the form of offense and complaints, and either you think it’s a problem or you don’t. It’s easy to notice that Open Phil has lots of associations with its grant recipients, and whether you think that is a problem is your own judgement to make; Open Phil clearly knew about these associations and made its own judgements on the matter. So a chorus of outsiders simply saying “Open Phil has too many associations with its grant recipients!” doesn’t add anything substantial to the issue. If anything, it pollutes the issue with tribalism, which makes it harder to make rational decisions.
When we can, we should assist others from outside the community in this way. Besides the benefits this gives to them, helping others boosts the reputation of us individually and as a community.
Reputation boosts to oneself and one’s community are already “accounted for”, though. Reciprocation and social status were part of the original evolution of morality in the first place, and they remain strong motivations for the people all over the world who already follow common sense morality to varying degrees. So why do you think that ordinary common-sense morality would underestimate this effect? To borrow from the ideas Yudkowsky recently posted here: you’re alleging that this system is inadequate, but the incentives of the system under consideration are fundamentally similar to the things which we want to pursue. So you’re really alleging that the system is inefficient, and that there is a free lunch available for anyone who decides to be extraordinarily helpful. I take that to be an extraordinary claim. Why have all the politicians, businesspeople, activists, political parties, and social movements of the world failed to demonstrate extraordinary niceness?
The obvious caveat to this is that we should not help organisations and individuals which cause harm.
I don’t see why this follows.
Moral trade is not premised upon the assumption that the people you’re helping don’t do harm. It makes it more difficult to pull off a moral trade, but it’s not a general principle, it’s just another thing to factor into your decision.
Maybe you mean that we will suffer special losses in reputation if we help actors which are perceived as causing harm. But perception of harm is different from harm itself, so in that case the relevant caveat tracks what broader society thinks, not actual harm. For instance, if there is a Student Travel Club at my university, helping them probably harms the world, due to the substantial carbon dioxide emissions caused by cruise ships and aircraft and the loss in productivity when students focus on travel rather than academia and work. But the broader public is not going to think that EA is doing something wrong if the EA club does something in partnership with the Student Travel Club.
Of course, some people will accept your assistance but not reciprocate. They will receive all the benefits of cooperation while not incurring any costs on themselves. This is known as the free-rider problem. Free-riding undermines the norm of cooperation, which makes it less likely that help is given when needed.
Right, and here again, I would expect an efficient market of common-sense morality, in a world of ultimately selfish actors, to develop to the point where common sense morality entails that we assist others to the point at which the reciprocation ends and the free-riding begins. I don’t see a reason to expect that common sense morality would underestimate the optimal amount of assistance which we should give to other people. (You may rightfully point out that humans aren’t ultimately selfish, but that just implies that common sense morality may be too nice, depending on what one’s goals are.)
Social groups and individuals throughout human history have been striving to identify the norms which lead to maximal satisfaction of their goals. The ones who succeed in this endeavor obtain the power and influence to pass on those norms to future generations. It’s unlikely that the social norms which coalesce at the end of this process would be systematically flawed if measured by the very same criteria of goal-satisfaction which have motivated these actors from the beginning.