A general policy I’ve adopted recently, as I’ve gotten more explicit* power/authority than I’m used to, is to “operate with slightly to moderately more integrity than I project explicit reasoning or cost-benefit analysis would suggest.”
This is primarily for epistemics and community epistemics reasons, but secondarily for optics reasons.
I think this almost certainly does risk leaving value on the table, but on balance it is a better tradeoff than the potential alternatives:
Just following explicit reasoning likely leads to systematic biases “shading” the upsides higher and the downsides lower, and I think this is a bias in explicit reasoning that can and should be corrected for.
There is also a slightly adversarial dynamic to the optics framing: moves that seem like a normal/correct amount of integrity to me may adversarially be read as lower integrity by others.
Projections/forecasts of what explicit reasoning would conclude (which are necessary because explicit reasoning is often too slow) may additionally be biased on top of the explicit reasoning itself (I have some pointers here).
Always “behaving with maximal integrity” probably leaves too much value on the table, unless you define integrity in a pretty narrow/circumscribed way (for example, avoiding conflicts of interest (CoIs) completely in EA grantmaking is very costly).
There’s also a “clean hands” issue: it feels better to avoid doing potentially problematic stuff at all times, but ultimately it’s a dereliction of duty to sacrifice (too much) impact just to make myself feel better.
Having very explicit group-level policies and sticking to the letter of them seems appealing, but in practice a lot of EA work and projects are a) often too preparadigmatic for very ironclad policies to make sense and b) usually converge on coordination points that are more lenient than I personally think is ideal for my own work.
Here are some nontrivial examples of downstream actions or subpolicies of this policy that I currently apply:
In general, I have a higher bar for friendships with grantees or employees than baseline.
For example, Manifold Markets had a pretty cool offsite in Mexico that I think I’d seriously consider going to if I didn’t have this pretty major CoI.
In the vast majority of situations where I’d be uncomfortable doing X with a female employee/grantee, I’d also usually refuse to do X with a male employee/grantee, even though on the face of it the latter is okay if we’re both heterosexual.
e.g. share a hot tub, particularly when others aren’t around
I try to record relatively minor CoIs (e.g. friendships that aren’t especially close) diligently, at mildly annoying time costs.
I follow the GWWC pledge and take it fairly seriously, even though an object-level evaluation of the time costs suggests that investigating personal donations is probably not competitive with other activities I could do (e.g. grant evaluations, management, setting research strategy, or personal development).
I have a moderately higher bar for spending money in ways that make my own life more enjoyable as well as save time, compared to a) spending money on other people similar to me in ways that help them save time, or b) spending money to save my time in ways that don’t make my life more enjoyable (e.g. buying high-speed internet on airplanes).
*A factor I did not fully account for in the past (and may still be misjudging, not sure) is that I may have had more “soft power” than I previously bargained for. E.g. I’m starting to see accounts of people making pretty large career or life decisions based on my past writings or conversations, whereas I had previously thought about my internet writings mostly cerebrally, as just contributing to a marketplace of ideas.