I am not very experienced in philosophy, but I have a question.
You present a problem that needs solving: funnel-shaped action profiles lead to undefined expected utility. You then say that this conclusion means we must adjust our reasoning so that we no longer reach it.
But why do you assume that this cannot simply be the correct conclusion from utilitarianism? Can we not say that we have taken the principal axioms of utilitarianism and, through correct logical steps, deduced from those axiomatic truths that expected utility is undefined for all our decisions?
To me, the next step after reaching this point would not be to change my reasoning (which would require assuming the logical steps were incorrect, no?) but rather to reject the axioms of utilitarianism, since they have rendered themselves ethically useless.
I have a fundamental piece of ethical reasoning that I would guess is pretty common here. It is this: given what we know about our deterministic (and maybe probabilistic) universe, nothing suggests the existence of such things as good/bad or right/wrong choices, so we conclude that nothing matters. However, this conclusion is obviously useless, and if nothing matters anyway, we might as well live by a kind of “next best” ethical philosophy that does provide us with right/wrong choices, just in case of the minuscule chance that it is correct.
However, you seem to suggest that utilitarianism just takes you back to the “nothing matters” situation, which would mean we have to move on to the “next next best” ethical philosophy.
Hmm, I just realised your post has fundamentally changed every ethical decision of my life...
I would greatly appreciate an answer from anyone, not only the OP. Thanks!
Welcome to the fantastic world of philosophy, friend! :) If you are like me you will enjoy thinking and learning more about this stuff. Your mind will be blown many times over.
I do in fact think that utilitarianism as normally conceived is just wrong, and one reason is that it says every action is equally choiceworthy, because they all have undefined expected utility.
But maybe there is a way to reconceive utilitarianism that avoids this problem. Maybe.
Personally, I think you might be interested in thinking about metaethics next. What do we even mean when we say something matters, or something is good? I currently think it’s something like “what I would choose, if I were idealized in various ways, e.g. if I had more time to think and reflect, if I knew more relevant facts, etc.”
Huh, it’s concerning that you see standard utilitarianism as wrong, because I have no idea what to believe if not utilitarianism.
Do you know where I can find out more about the “undefined” issue? This is pretty much the most important thing for me to understand, since my conclusion will fundamentally determine my behaviour for the rest of my life, yet I can’t find any information apart from your posts.
Thanks so much for your response and posts. They’ve been hugely helpful to me.
The philosophy literature has material on this. If I recall correctly, I linked some of it in the bibliography of this post. It’s been a while since I thought about this, I’m afraid, so I don’t have the references in memory. You should probably search the Stanford Encyclopedia of Philosophy for the “Pasadena game” and the “St. Petersburg paradox”.
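For concreteness, here is a quick numerical sketch (my own illustration, not from the linked bibliography) of why these two games break expected utility. The St. Petersburg game pays 2^n with probability 2^(-n), so every term of the expected-value series is 1 and the sum diverges to infinity. The Pasadena game pays (-1)^(n+1) · 2^n / n with probability 2^(-n), so the expected-value series is the alternating harmonic series, which converges only conditionally: rearranging its terms changes the sum, so the expectation is undefined.

```python
def pasadena_partial_sum(n_terms: int) -> float:
    """Partial sum of the Pasadena expected-utility series in the natural order.

    Term n is payoff * probability = ((-1)**(n+1) * 2**n / n) * 2**(-n)
                                   = (-1)**(n+1) / n  (alternating harmonic series).
    """
    return sum((-1) ** (n + 1) / n for n in range(1, n_terms + 1))


def st_petersburg_partial_sum(n_terms: int) -> float:
    """Partial sum of the St. Petersburg expected-utility series.

    Term n is payoff * probability = 2**n * 2**(-n) = 1, so the sum
    grows without bound as n_terms increases.
    """
    return sum((2 ** n) * (2 ** -n) for n in range(1, n_terms + 1))


if __name__ == "__main__":
    # In the natural order the Pasadena series approaches ln(2) ~ 0.693,
    # but a rearrangement can make it converge to any real number at all.
    print(pasadena_partial_sum(10_000))
    # The St. Petersburg partial sums just count the number of terms.
    print(st_petersburg_partial_sum(100))
```

The point of the sketch: a conditionally convergent series has no order-independent value, so "the" expected utility of the Pasadena game does not exist, while the St. Petersburg expectation is infinite rather than undefined.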