Does being principled produce the same choice outcomes as being a long-term consequentialist?
Leadership circles[1] emphasize putting principles first. Utilitarianism rejects this approach: it focuses on maximizing outcomes, with little normative attention paid to the process (or, as the quip goes: the ends justify the means). This (apparent) distinction pits EA against conventional wisdom and, speaking from my experience as a group organizer,[2] is a turn-off.
However, this dichotomy seems false to me. I can easily imagine a conflict between a myopic utilitarian and a deontologist (e.g. the former might rig the lottery to send more money to charity).[3] I have more trouble imagining a conflict between a provident utilitarian and a principles-first person (e.g. cheating may help in the short term, but in the long term, I may be barred from playing the game).[4]
Principles sometimes butt heads (e.g. being kind vs. being honest), but so can different choice outcomes (e.g. minimizing animal suffering vs. maximizing human flourishing). Both kinds of conflict are resolved the same way, by refining the question’s parameters or definitions:[5] being dishonest is an unkindness, and we need to take both kinds of suffering into account.
All in all, it seems like both approaches face the same internal problems, admit the same resolutions, and could produce the same set of answers. If this turns out to be true, there are a few possible consequences:
High confidence (>85%): With enough reflection, EA could develop ‘EA principles’ that are not framed in terms of consequences but are nonetheless fundamentally aligned with EA.
Medium confidence (~55%): If EA develops these principles, it can advertise them to current and prospective members, potentially attracting demographics that are fundamentally opposed to utilitarianism.
Low confidence (~30%): If EAs adopt these principles, they may shift their primary focus to processes (‘doing things right’) and demote outcomes to a secondary concern, adopting the motto: if you do things the right way, the right things will come.[6]
I’m thinking of Stephen Covey’s works “The 7 Habits of Highly Effective People” (1989) and “Principle-Centered Leadership” (1992). If these leadership models are outdated, please correct me.
When tabling for a new EA group, mentioning utilitarianism cast a shadow over a sizeable share (~40%) of conversations. When I instead explained how we choose between the lives we save every day, people seemed more empathetic, but it still felt like a harder sell than it had to be.
I would love for someone to do the math properly to see if this expected value works out. A quick calculation follows (making assumptions along the way). Assume the jackpot is $100M, there is an 80% chance of getting caught, you normally earn $200k per year, and getting caught means 10 years in prison. EV of rigging the lottery = P(not caught) × jackpot + P(caught) × (income lost in prison) = 0.2 × $100M + 0.8 × (−$200k/yr × 10 yr) = $18.4M.
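For concreteness, here is a minimal sketch of that back-of-the-envelope calculation in Python; the jackpot, salary, sentence length, and probability of getting caught are all the assumed figures above, not real data:

```python
# A minimal sketch of the back-of-the-envelope EV calculation above.
# All inputs are assumptions from this footnote, not real figures.
jackpot = 100_000_000      # $100M prize if the rigging goes undetected
p_caught = 0.80            # assumed probability of getting caught
salary = 200_000           # $200k/yr of income foregone while in prison
years_in_prison = 10       # assumed sentence for rigging

# EV = P(not caught) * jackpot + P(caught) * (income lost in prison)
ev = (1 - p_caught) * jackpot + p_caught * (-salary * years_in_prison)
print(f"EV of rigging the lottery: ${ev:,.0f}")  # -> $18,400,000
```

Under these assumptions the naive expected value comes out positive, which is exactly what makes rigging tempting to the myopic utilitarian; the long-term costs discussed in the main text are what this calculation leaves out.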
I’m assuming that we live in a society that doesn’t value cheating...
This is Captain Kirk’s strategy for beating the Kobayashi Maru.
Its modus tollens comes to the same conclusion as utilitarianism: if you have the wrong consequences, you must have had the wrong processes.
The most important principle is to maximize long-run utility. All else follows.