Sarah’s post highlights some of the essential tensions at the heart of Effective Altruism.
Do we care about “doing the most good that we can” or “being as transparent and honest as we can”? These are two different value sets. They will sometimes overlap, and sometimes they will conflict.
And please don’t say that “we do the most good that we can by being as transparent and honest as we can” or that “being as transparent and honest as we can” is best in the long term. Just don’t. You’re simply lying to yourself and to everyone else if you say that. If you can’t imagine a scenario where “doing the most good that we can” and “being as transparent and honest as we can” are opposed, you’ve fallen into a failure mode: flinching away from the truth.
So when push comes to shove, which one do we prioritize? When we have to throw the switch and have the trolley crush either “doing the most good” or “being as transparent and honest as we can,” which do we choose?
For a toy example, say you are talking to your billionaire uncle on his deathbed and trying to convince him to leave money to AMF instead of his current favorite charity, the local art museum. You know he would respond better if you exaggerate the impact of AMF. Would you do so, whether lying by omission or in any other way, in order to get much more money for AMF, given that no one else would find out about this situation? What about if you know that other family members are standing in the wings and ready to use all sorts of lies to advocate for their favorite charities?
If you would not lie, that’s fine, but please don’t pretend that you care about doing the most good. Just don’t. You care more about being as transparent and honest as possible than about doing the most good.
If you would lie to your uncle, then you do care about doing the most good. However, you should consider at what price point you would not lie; at that point, we’re just haggling.
The people quoted in Sarah’s post, myself included, all highlight how doing the most good sometimes involves not being as transparent and honest as we can. Different people have different price points, that’s all. We’re all willing to bite the bullet and sometimes send that trolley over transparency and honesty, whether by questioning the value of public criticism, as Ben does, by appealing to emotions, as Rob does, or by using intuition as evidence, as Jacy does, for the sake of what we believe is the most good.
As a movement, EA has a big problem with believing that the ends never justify the means. Yes, sometimes the ends do justify the means, at least if we care about doing the most good. We can debate whether we are mistaken about the ends justifying the means in any given case, but using insufficient means to accomplish our ends is just as bad as using excessive means. If we are truly serious about doing the most good possible, we should let our end goal be the North Star and work backward from there, rather than hobbling ourselves with preconceived notions of “intellectual rigor” at the cost of doing the most good.
I, at least, would say that I care about doing the most good that I can, but I am also mindful of the fact that I run on corrupted hardware, which makes ends-justify-the-means arguments unreliable, per EY’s classic argument (http://lesswrong.com/lw/uv/ends_dont_justify_means_among_humans/):
“‘The end does not justify the means’ is just consequentialist reasoning at one meta-level up. If a human starts thinking on the object level that the end justifies the means, this has awful consequences given our untrustworthy brains; therefore a human shouldn’t think this way. But it is all still ultimately consequentialism. It’s just reflective consequentialism, for beings who know that their moment-by-moment decisions are made by untrusted hardware.”
This doesn’t mean I think there’s never a circumstance where you need to breach a deontological rule; I agree with EY when they say, “I think the universe is sufficiently unkind that we can justly be forced to consider situations of this sort.” This is why, under Sarah’s definition of absolutely binding promises, I would simply never make such a promise. I might say that I would try my best, and that to the best of my knowledge there was nothing that would prevent me from doing the thing, or something like that. But I think the universe can be amazingly inconvenient, and I don’t want to be a pretender at principles I would not actually live up to in extremis.
The theory I tend to operate under I think of as “biased naive consequentialism”: I do naive consequentialism, estimating consequences out as far as I can easily see, and then introduce a heavy bias against things which are likely to have untracked bad consequences, e.g. lying or theft. (I am somewhat amused that all the adjectives in the description are negative ones.) But given a sufficiently massive difference in stakes, sure, I’d lie to an axe murderer. This means there is a “price” somewhere. This is probably most similar to the concept of “way utilitarianism”, which I think is way better than either act or rule utilitarianism, and which is discussed as a sort of steelman of Mohist ideas (https://plato.stanford.edu/entries/mohism/).
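As a rough illustration only, here is a minimal sketch of that decision rule in Python. The action types, utility numbers, and penalty size are all invented by me for the example, not anything from the original discussion.

```python
# A minimal sketch of "biased naive consequentialism" with invented numbers.

PENALTY = 100.0  # heavy but finite: this is where the "price" lives

FLAGGED = {"lying", "theft"}  # types prone to untracked bad consequences

def naive_utility(action):
    """Naive consequentialism: count only the consequences we can easily see."""
    return action["estimated_utility"]

def biased_utility(action):
    """Naive estimate, minus a heavy flat bias against flagged action types."""
    score = naive_utility(action)
    if action["type"] in FLAGGED:
        score -= PENALTY
    return score

def choose(actions):
    """Pick the action with the highest biased score."""
    return max(actions, key=biased_utility)

# Ordinary case: a small gain from lying loses to the penalty.
print(choose([
    {"type": "honest", "estimated_utility": 0.0},
    {"type": "lying", "estimated_utility": 5.0},
])["type"])  # "honest"

# Axe-murderer case: a massive difference in stakes clears the price.
print(choose([
    {"type": "honest", "estimated_utility": 0.0},
    {"type": "lying", "estimated_utility": 1000.0},
])["type"])  # "lying"
```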
One of the things I take from the thinking around the non-central fallacy, a.k.a. the worst argument in the world (http://lesswrong.com/lw/e95/the_noncentral_fallacy_the_worst_argument_in_the/), is that one should smoothly reduce the strength of such biases for examples which are very atypical of the circumstances the bias was intended for, so as not to have weird sharp edges near category borders.
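Concretely, one hedged way to implement that smoothing, continuing the sketch above, is to scale the penalty by a typicality score. The scores here are assumed inputs of mine, not part of the original argument.

```python
# Attenuate the penalty by how typical the case is of the circumstances
# the bias was designed for, so the bias fades smoothly at the borders.

def smoothed_penalty(base_penalty, typicality):
    """Scale the bias smoothly: typicality 1.0 means a central case of
    the category, 0.0 means something that only technically fits it."""
    return base_penalty * typicality

print(smoothed_penalty(100.0, 1.0))  # full bias for a central lie
print(smoothed_penalty(100.0, 0.2))  # only a fifth of the bias for a non-central case
```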
All this is to say that in weird extreme edge cases, under conditions of perfect knowledge, I don’t think what people do is important. It’s okay to have a price. But in the central cases, in actual life, I think people should have either a very strong bias against deception (and toward viewing deceptive behaviour poorly), or an outright deontological prohibition if they can’t reliably maintain that bias.
If I were to name one thing I think is a big problem, it’s that in practice some people’s price seems only able to be infinite or zero, or even negative: a lot of people seem to get tempted by cool “ends justify the means” arguments which don’t even look prima facie like they’d actually have positive utility. Trading discourse and growth for money in a nascent movement, for instance: even naive utilitarianism can track far enough out to see the problems there; you would have to have an intuitive preference for deception to favour it.
I disagree with you in that I think an infinite price works fine almost always, so it wouldn’t be a big problem if everyone had that; I’d be very happy if all the people whose price to cheat is around zero moved it to infinite. But I agree with you that infinite isn’t actually the correct answer for an ideal unbiased reasoner; I just don’t agree that this should affect how humans behave under the normal circumstances that make up the work of the EA movement.
The alarming part for me is that I think these debates do, in general, affect how people behave, because people erroneously jump from “a hypothetical perfect reasoner in a hypothetical scenario would not behave deontologically” to sketchiness in practice.
Let me first clarify that I see doing the most good as my end goal, and YMMV; no judgment on anyone who cares more about truth than about doing good. This is just my value set.
Within that value set, using “insufficient” means to get to EA ends is just as bad as using “excessive” means. In this case, being “too honest” is just as bad as “not being honest enough.” The correct course of action is to calibrate one’s level of honesty so as to maximize positive long-term impact toward doing the most good.
Now, the above refers to the ideal-type scenario. IRL, different people are differently calibrated. Some tend too much toward exaggerating, some too much toward being humble and understating the case, and either way it’s a mistake. So one should learn where one’s own bias lies, and push against that bias.
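As a toy model only: the sketch below treats “level of honesty” as a single number, with an invented impact curve that peaks at perfect calibration and an assumed 20% exaggeration bias. None of these specifics come from the discussion above; they just make the calibration point concrete.

```python
# A toy model of calibrating honesty: the quadratic impact curve and the
# 20% bias figure are invented for illustration.

def impact(claim_strength):
    """Hypothetical long-term impact, peaking when claims match reality
    (1.0) and falling off symmetrically whether you overstate
    ("not honest enough") or understate ("too honest")."""
    return -(claim_strength - 1.0) ** 2

my_instinct = 1.2    # I habitually overstate by about 20%
my_known_bias = 1.2  # ...and I have learned this about myself
corrected = my_instinct / my_known_bias  # push against the known bias

print(impact(my_instinct))  # about -0.04: uncorrected exaggeration costs impact
print(impact(corrected))    # -0.0: the calibrated statement does best
```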