Reposting from Twitter: It’s a moderate update on the prevalence of naive utilitarians among EAs.
Expanded:
A classic problem with debates about utilitarianism is that the vocabulary used makes a motte-and-bailey defense of utilitarianism too easy:
1. Someone points to a bunch of problems with an act-consequentialist decision procedure, or cases where naive consequentialism tells you to do bad things.
2. The default response is “but this is naive consequentialism, no one actually does that.”
3. You may suspect that while people don’t advocate for or self-identify as naive utilitarians … they actually make the mistakes anyway.
The case provides some evidence that these problems can actually happen in practice, in situations important enough to care about. [*]
Also, you have the problem that sophisticated naive consequentialists could be tempted to lie to you about their morality (“no worries, you can trust me, I’m following sensible deontic constraints!”). Personally, before the recent FTX events, I would have been more of the opinion “nah, this sounds too much like an example from a philosophy paper, unlikely with typical human psychology.” Now I take it as a more real problem.
[*] What I’m actually worried about …
Effective altruism motivated thousands of people to move into highly leveraged domains with large and potentially deadly consequences: powerful AI stuff, pandemics, epistemic tech. I think that if just 15% of them believe in some form of hardcore utilitarianism, where you drop integrity constraints and trust your human brain’s ability to evaluate when to be constrained and when not, it’s … actually a problem?