I don’t think these are complex questions! If your minimalist axiology ranks based on states of the world (and not actions, except insofar as they lead to states of the world), then the best possible value to achieve is zero. Assuming this is achieved by an empty universe, then there is nothing strictly better than taking an action that creates an empty universe forever. This is a really easy theorem to prove!
I believe that it’s a complex question whether or not this should be a dealbreaker for adopting a minimalist axiology, but that’s not the question you wrote down. The answers to
Would an empty world (i.e. a world without sentient beings) be axiologically perfect?
For any hypothetical world, would the best outcome always be realized by pressing a button that leads to its instant cessation?
really are just straightforwardly “yes”, for state-based minimalist axiologies where an empty universe has none of the thing you want to minimize, which is the thing you are analyzing in this post unless I have totally misread it.
Hi Rohin; I apologize for being vague and implicit. I agree that the first question is not complex, and I should’ve clarified that I’m primarily responding to the related worries (which are almost completely implicit in the post), and which I think are much more complex than the literal questions are. You helped me realize just now that the post may look like it’s primarily answering the written-down questions, even though the main reason for all my elaboration (on the assumptions, possible biases, comparison with offsetting views, etc.) was to respond to those implicit worries.
Regarding whether the answers to the first two questions are straightforwardly “yes”: I would still note that such a one-word answer would lack the nuance present in what Magnus wrote above (which I noted already in the overview, because I think it’s relevant for the worries).
(I’ll continue a bit under your other comment.)