The 3% figure for utilitarianism strikes me as a bit misleading on its own, given what else Will said. (I’m not accusing Will of intent to mislead here: he said something very precise that I, as a philosopher, entirely followed; it was just a bit complicated for lay people.)

Firstly, he said a lot of the probability space was taken up by error theory, the view that there is no true morality. So to get at what Will himself endorses, whether or not there is a true morality, you basically have to subtract his (unknown but large) credence in error theory from 1 and then renormalize his other credences so that they add up to 1 on their own. (I’ll put a toy version of the arithmetic below.)

Secondly, there’s the difference between utilitarianism, on which only the consequences of your actions matter morally and the only consequences that matter are (total or average) pain and pleasure and/or fulfilled preferences, and consequentialism, on which only the consequences of your actions matter morally but it’s left open what those consequences are. My memory of the podcast (could be wrong, only listened once!) is that Will said that, conditional on error theory being false, his credence in consequentialism is about 0.5.

This really matters in the current context, because many non-utilitarian forms of consequentialism can also promote maximizing in a dangerous way; they just disagree with utilitarianism about exactly what you are maximizing. So really, Will’s credence in a view that, interpreted naively, recommends dangerous maximizing is functionally (i.e. ignoring error theory in practice) more like 0.5 than 0.03, as I understood him in the podcast. Of course, he isn’t actually recommending dangerous maximizing regardless of his credence in consequentialism (at least in most contexts*), because he warns against naivety.
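To make the “subtract and renormalize” arithmetic concrete, here is a toy version with numbers I am inventing purely for illustration (Will only gave the 3% figure; the 0.4 for error theory is made up):

\[
P(\text{utilitarianism} \mid \neg\,\text{error theory}) \;=\; \frac{P(\text{utilitarianism})}{1 - P(\text{error theory})} \;=\; \frac{0.03}{1 - 0.4} \;=\; 0.05
\]

The bigger the share of probability going to error theory, the more the headline 3% understates the figure conditional on there being a true morality at all.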
(Actually, my personal suspicion is that ‘consequentialism’ on its own is basically vacuous, because any view gives a moral preferability ordering over choices in situations, and really all that the numbers in consequentialism do is help us represent such orderings in a quick and easily manipulable manner, but that’s a separate debate.)
*Presumably, sometimes dangerous, unethical-looking maximizing really is best from a consequentialist point of view, because the dangers of not doing so, or the upside of doing so if you are right about the consequences of your options, outweigh the risk that you are wrong about the consequences of the different options, even once you take into account the higher-order evidence that people who think intuitively bad actions maximize utility are nearly always wrong.
My memory of the podcast (could be wrong, only listened once!) is that Will said that, conditional on error theory being false, his credence in consequentialism is about 0.5.
I think he meant conditional on error theory being false, and also on not “some moral view we’ve never thought of”.
Here’s a quote of what Will said starting at 01:31:21: “But yeah, I tried to work through my credences once and I think I ended up in like 3% in utilitarianism or something like. I mean large factions go to, you know, people often very surprised by this, but large factions go to, you know, to error theory. So there’s just no correct moral view. Very large faction to like some moral view we’ve never thought of. But even within positive moral views, and like 50-50 on non consequentialism or consequentialism, most people are not consequentialists. I don’t think I’m.”
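Taking that quote at face value, the implied structure seems to be roughly the following. The specific fractions for error theory and for “some moral view we’ve never thought of” are my own illustrative guesses, since Will only says “large” and “very large”; only the 3% and the 50-50 split are his:

\[
\begin{aligned}
P(\text{error theory}) &\approx 0.45 && \text{(illustrative ``large fraction'')}\\
P(\text{some view we've never thought of}) &\approx 0.45 && \text{(illustrative ``very large fraction'')}\\
P(\text{positive views we have thought of}) &\approx 1 - 0.45 - 0.45 = 0.10\\
P(\text{consequentialism}) &\approx 0.5 \times 0.10 = 0.05\\
P(\text{utilitarianism}) &\approx 0.03 && \text{(the figure Will quotes)}
\end{aligned}
\]

On that reading, the roughly 0.5 credence in consequentialism is conditional both on error theory being false and on the true view not being one we’ve never thought of, which is the correction above.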
Overall it seems like Will’s moral views are pretty different from SBF’s (or what SBF presented to Will as his moral views), so I’m still kind of puzzled about how they interacted with each other.
’also on not “some moral view we’ve never thought of”.’
Oh, actually, that’s right. That does change things a bit.