I totally agree. But even if we conservatively say that it’s a 50% chance that he was using act utilitarianism as his decision procedure, that’s enough to consider it compromised, because it could lead to multiple billions of dollars of damages (edited).
There are also subtler issues: if you intend to be act utilitarian but aren’t and do harm, that’s still an argument against intending to use the decision procedure. And if someone says they’re act utilitarian but isn’t and does harm, that’s an argument against trusting people who say they’re act utilitarian.
Not trying to take this out on you, but I’m annoyed by how much all this advocacy of deontology all of a sudden overlaps with covering our own asses. I don’t buy it as a massive update about morality or psychology from the events themselves but as a massive update about optics.
Reposting from twitter: It’s a moderate update on the prevalence of naive utilitarians among EAs.
Expanded:
A classic problem with this debate on utilitarianism is that the vocabulary used makes a motte-and-bailey defense of utilitarianism too easy.
1. Someone points to a bunch of problems with an act consequentialist decision procedure / cases where naive consequentialism tells you to do bad things.
2. The default response is “but this is naive consequentialism, no one actually does that.”
3. You may suspect that while people don’t advocate for or self-identify as naive utilitarians … they actually make the mistakes anyway.
The case provides some evidence that the problems can actually happen in practice, in situations important enough to care about. [*]
Also, you have the problem that sophisticated naive consequentialists could be tempted to lie to you about their morality (“no worries, you can trust me, I’m following the sensible deontic constraints!”). Personally, before the recent FTX happenings, I would have been more of the opinion “nah, this sounds too much like an example from a philosophical paper, unlikely with typical human psychology.” Now I take it as a more real problem.
[*] What I’m actually worried about …
Effective altruism motivated thousands of people to move into highly leveraged domains with large and potentially deadly consequences: powerful AI stuff, pandemics, epistemic tech. I think that if just 15% of them believe in some form of hardcore utilitarianism where you drop integrity constraints and trust your human brain’s ability to evaluate when to be constrained and when not, it’s … actually a problem?
I’d agree with this statement more if it acknowledged the extent to which most human minds have the kind of propositional separation between “morality” and “optics” that obtained financially between FTX and Alameda.
This will be a relief if true. I am much more worried about people not having principles (or having their principles guided by something other than morality) than about people being overly concerned about optics. The latter is a tactical concern (albeit a big one) and hopefully fixable; the former is evidence that people in our movement are too conformist, or otherwise too weak or too evil, to confront moral catastrophes.
I don’t think they know they are concerned about optics. My suspicion was that the bad optics suddenly made utilitarian ideas seem false or reckless.
This strikes me as a bad play of “if there was even a chance”. Is there any cognitive procedure on Earth that passes the standard of “Nobody ever might have been using this cognitive procedure at the time they made $mistake?” That more than three human beings have ever used? I think when we’re casting this kind of shade we ought to be pretty darned sure, preferably in the form of prior documentation that we think was honest, about what thought process was going on at the time.
Why require surety, when we can reason statistically? There’ve been maybe ten comparably-sized frauds ever, so on expectation, hardline act utilitarians like Sam have been responsible for 5% of the worst frauds, while they represent maybe 1/50M of the world’s population (based on what I know of his views 5-10yrs ago). So we get a risk ratio of about a million to 1, more than enough to worry about.
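A rough sketch of that arithmetic, taking the figures above at face value (about ten comparably-sized frauds ever, presumably the ~50% chance from upthread spread over those ten, and a base rate of roughly 1 in 50 million people):

$$
\text{expected share of worst frauds} \approx \frac{0.5}{10} = 5\%, \qquad \text{base rate} \approx \frac{1}{5 \times 10^{7}}
$$

$$
\text{risk ratio} \approx \frac{0.05}{1/(5 \times 10^{7})} = 2.5 \times 10^{6}
$$

i.e. on the order of a million to one, as stated.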
Anyway, perhaps it’s not worth arguing, since it might become clearer over time what his philosophical commitments were.
I guess it’s some new evidence that one person was maybe using act utilitarianism as a decision procedure and messed up? It’s also not theoretically impossible that he was correct in his assessment of the possible outcomes, chose the higher-EV option, and we just ended up in one of the bad-outcome worlds.
I don’t understand this argument at all. I assume nobody thought it was literally impossible for the implementation of a moral theory (any moral theory!) to lead to bad consequences before. Maybe I’d understand your point more if you stated it quantitatively. Like:
“Previously, I thought it was x% likely that a random act utilitarian would be led by their philosophy to do worse stuff than if they’d endorsed most other moral theories. After seeing the case of SBF, I now think the probability is y% instead, because our sample size is small enough that a single data point can be a large update.”
Looks like Eliezer was similarly confused by your phrasing; your new argument (“almost no multibillion dollar frauds have ever happened, so we should do a very large update about the badness of everything that might have contributed to SBF defrauding people”) sounds very different, and makes more sense to me, though I suspect it won’t end up working.
I think you’re right—I could have avoided some confusion if I said it could lead to “multi-billion-dollar-level bad consequences”. Edited to clarify.