I have very mixed feelings about Sarah’s post; the title seems inaccurate to me, and I’m not sure about how the quotes were interpreted, but it’s raised some interesting and useful-seeming discussion. Two brief points:
I understand what causes people to write comments like “lying seems bad but maybe it’s the best thing to do in some cases”, but I don’t think those comments usually make useful points (they typically seem pedantic at best and edgy at worst), and I hope people aren’t actually guided by considerations like those. Most EAs I work with, AFAICT, strive to be honest about their work and believe that this is the best policy even when there are prima facie reasons to be dishonest. Maybe it’s worth articulating some kind of “community-utilitarian” norms, probably drawing on rule utilitarianism, to explain why I think honesty is the best policy?
I think the discussion of what “pledge” means to different people is interesting; a friend pointed out to me that blurring the meaning of “pledge” into something softer than an absolute commitment could hurt my ability to make absolute commitments in the future, and I’m now considering ways to be more articulate about the strength of different commitment-like statements I make. Maybe it’s worth picking apart and naming some different concepts, like game-theoretic cooperation commitments, game-theoretic precommitments (e.g. virtues adopted before a series of games is entered), and self-motivating public statements (where nobody else’s decisions lose value if I later reverse my statement, but I want to participate in a social support structure for shared values)?