[Question] Moral dilemma
Hello to all,
I have been tormented for several weeks by moral questions to which I find no answer and which prevent me from functioning normally and from sleeping.
They put me in a state of great psychological distress.
These ideas all revolve around Pascal’s wager, questions of infinite utility, of the “right” moral system and of the “right” way to make decisions.
Here are a few questions in bulk:
- A well-known religion forbids eating pork. By letting people around me eat pork, don’t I slightly increase their probability of going to hell, and thereby make an infinitely bad decision?
- Are there moral systems that are internally consistent and avoid acting on negligible probabilities?
- Is it necessary to have first found the “right” moral system before making moral decisions?
- If we believe a behavior is more likely to cause infinite unhappiness than infinite happiness, should we prevent that behavior, or instead reconsider whether our initial probability estimates were mistaken?
Sorry if these questions are unclear or poorly posed; these ideas have completely obsessed me for the last 3–4 weeks and I really don’t know where I stand. Is there any theoretical material that could help me?
Thanks to those who will take the time to answer me :-)