I don’t see how the Simulation Hypothesis is a counterargument to EA, if you presume everyone else is still as “real” (i.e., simulated at the same level of detail) as you are. You clearly have conscious experience, emotional valence, and so on, despite being a simulation; so does everyone else, so we should still help them live their best simulated lives. Whether one is a simulation or not, we can clearly feel the things we call pleasure and pain, happiness and sorrow, freedom and despair, so in my worldview we clearly have moral worth. We should probably also be working on some simulation-specific research, but I don’t see how something like malaria nets would cease to be worthwhile.
Thanks for replying to my question. Your argument is certainly valid and important. But if we take the simulation hypothesis seriously, it is only one within a spectrum of possible arguments that depend on the very nature of the simulation. For instance, we might find out that our universe has been devised in such a twisted way that any improvement for its conscious beings produces a proportional, unbearable amount of pain in another, parallel simulated universe. In such a case, would pursuing effective altruism or longtermism still be moral?
Effective altruism is about doing the most good possible, so I’d say one can still pursue that under any circumstance. In the hypothetical you mentioned, the current form of EA would definitely be immoral in my opinion, because it is mostly about improving the lives of people in this universe, which would cause more suffering elsewhere and thus be wrong. So, in such a world, EA would have to look incredibly different—the optimal cause area would probably be to find a way to change the nature of our simulation, and we’d have to give up a lot of the things we do now because their net consequences would be bad.
That’s one of the best parts about EA, in my opinion: it’s a question (“How do we do the most good?”) rather than an ideology (“You must do these things”). Even if our current priorities turned out to be wrong, we could still pursue the question anew.
I agree with your approach to the question, but if we really take the simulation hypothesis seriously (or at least consider it probable enough to concern us), perhaps the first step should be finding a way to tell whether or not we actually live in a simulation. Research in physics and astronomy could explicitly look for, and devise experiments to demonstrate, systematic inconsistencies in the fabric of our universe that could hint at the constructed nature of its laws. This is, in a way, an indirect answer to your last question: if effective altruism is not an ideology simply to be followed but a rational enterprise grounded in the actual nature of our universe, then it should also be concerned with improving our understanding of that universe, even if this eventually leads to a radical rethink of what effective altruism should be.
I agree. If the Simulation Hypothesis became decently likely, we would want to answer questions like:
- Does our simulation have a goal? If so, what?
- Was our simulation likely created by humans?
Also, we’d probably want to be very careful with those experiments. Observing existing inconsistencies makes sense, but deliberately trying to force the simulation into unlikely states seems like an existential risk to me; the last thing you want is to accidentally crash the simulation!