In general I would agree that it's better to do what is good rather than what looks good. However, when you are the face of a global movement, optics have a meaningful financial implication. Imagine if this bad press made one billionaire 0.1% less likely to get involved with EA. That calculation would dominate any potential efficiency savings from insourcing a service provider.
I used to think this and I increasingly don't. Doing good things is what we're all about. Doing good things even if it looks bad in the tabloid press is good publicity with the people who actually care about doing good, and they're more important to us than the rest.
I think an EA that is weirder and more unapologetic about doing its stuff attracts more of the right kind of people, and can generally get on with things more, than an EA that frantically tries to massage its optics to appeal to everyone.
I am having a hard time, here and speckled throughout the rest of this post, with people writing that we are doing the "good thing" and should do that rather than just what looks good, when the "good thing" in question is buying a castle and not, say, caring about wild animal suffering.
I guess I've gone off into the abstract argument about whether we should care about optics or not. I don't mean to assert that buying Wytham Abbey was a good thing to do; I just think that we should argue about whether it was a good thing to do, not whether it looks like a good thing to do.
I'm arguing that deciding whether or not it is a good thing should include the PR impact (i.e. a weak consequentialist approach). I don't care if things look bad, unless that perception leads to bad outcomes. In this case, I think the perception could lead to bad outcomes that dominate the good outcomes in the expected value calculation.
I very much agree with Michael here.
I think this kind of reasoning is difficult to follow in practice, and likely to do more harm than good. E.g., I expect some billionaires are drawn to a movement that says fuck PR and actually tries to do what's important. What if trying to account for PR has a 0.1% chance of putting off those billionaires? Etc.
At the very least, "do what is actually good rather than just what looks good" seems like a valid philosophy to follow if trying to do good, even after accounting for optics: trying to account for optics can easily be misleading, paralysing, etc.
EA is all about uncertain EV calculations; I don't see why we should exclude optics when calculating EV. We should just embrace the uncertainty and try our best.
The only part of EA that doesn't involve super uncertain EV calculations (which can be misleading and paralysing) is randomista development.
This is fair, and I don't want to argue that optics don't matter at all or that we shouldn't try to think about them.
My argument is more that actually properly accounting for optics in your EV calculations is really hard, and that most naive attempts to do so can easily do more harm than good. And I think people can easily underestimate the costs of caring less about truth or effectiveness or integrity, and overestimate the benefits of being legibly popular or safe from criticism. Generally, people have a strong desire to be popular and to fit in, and I think this can significantly bias thinking around optics! I particularly think this is the case with naive expected value calculations of the form "if there's even a 0.1% chance of bad outcome X we should not do this, because X would be super bad", because it's easy to anchor on some particularly salient example of X and miss a bunch of other tail-risk considerations.
The "annoying people by showing that we care more about style than substance" point was an example of a countervailing consideration that argues in the opposite direction and could also be super bad.
This argument is motivated by the same reasoning as the "don't kill people to steal their organs, even if it seems like a really good idea at the time, and you're confident no one will ever find out" argument.
Thanks, Neel. This is a very helpful comment. I now don't think our views are too far apart.
Thanks! Glad to hear it. This classic Yudkowsky post is a significant motivator. Key quote:

But if you are running on corrupted hardware, then the reflective observation that it seems like a righteous and altruistic act to seize power for yourself – this seeming may not be much evidence for the proposition that seizing power is in fact the action that will most benefit the tribe.

By the power of naive realism, the corrupted hardware that you run on, and the corrupted seemings that it computes, will seem like the fabric of the very world itself – simply the way-things-are.

And so we have the bizarre-seeming rule: "For the good of the tribe, do not cheat to seize power even when it would provide a net benefit to the tribe."
In general, I agree with you (as I say in my first sentence), but:
(1) EV's objectives are the promotion of EA, i.e. PR is its raison d'être; and
(2) in this case, the benefit seems like a rounding error (maybe you could argue it would save ~£100k p.a.) compared to the PR potential. Even if it's hard to assess the PR impact (and I acknowledge it could go either way), it's negligent not to consider it. A rough back-of-envelope version of this comparison is sketched below.
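To make that comparison concrete, here is a minimal back-of-envelope sketch. Only the 0.1% figure and the ~£100k p.a. saving come from the thread above; the $1B lifetime-giving figure for a single billionaire and the rough GBP-to-USD conversion are illustrative assumptions, not claims about any actual donor.

```python
# Back-of-envelope sketch: expected PR cost vs. insourcing savings.
# Assumption (illustrative): one mega-donor might eventually give ~$1B, and the
# bad press shifts their probability of engaging with EA down by 0.1 percentage points.
potential_donation = 1_000_000_000   # assumed lifetime giving of one billionaire, USD
probability_shift = 0.001            # 0.1% drop in likelihood of engaging

expected_pr_cost = potential_donation * probability_shift  # ~$1M in expectation

annual_savings = 125_000             # ~£100k p.a. insourcing saving, loosely converted to USD

years_to_offset = expected_pr_cost / annual_savings
print(f"Expected PR cost: ${expected_pr_cost:,.0f}")
print(f"Years of insourcing savings needed to offset it: {years_to_offset:.0f}")
```

On these illustrative numbers, the expected PR cost is roughly a decade's worth of the efficiency saving, which is the sense in which the PR term can dominate the calculation; the point is the structure of the comparison, not the particular figures.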