Thanks, David. I would still fully endorse expectational total hedonistic utilitarianism (ETHU) even if this implied some "supervillain type stuff" among the most cost-effective actions. However, in practice, I am not aware of any seemingly villainous actions that follow from fully endorsing ETHU. In general, "supervillain type stuff" increases one's risk of going to prison, and therefore decreases expected future working time and donations, which I think are the major ways one can contribute to a better world.
I think this is fairly fragile to future developments. What if you become convinced humans do more harm than good, and there is also a short window of time between when it becomes technically possible for laypeople to make a doomsday virus in their garage and when the government regulates to prevent this? It seems like, even if you have a 50–50 chance of being caught or whatever, the value of succeeding on your credences would be so high that the expected value of trying to kill everyone might well still be net positive.
I think I sometimes get an unreal vibe from some of your writing, not because I actually think there is a danger you would kill everyone, but because I think it's obvious you probably wouldn't, and so you don't really fully endorse ETHU in every feasible situation.
I think my position is super robust to future developments. Feel free to suggest bets. Given my empirical beliefs, I just do not see how the most cost-effective interventions can include "supervillain type stuff". For example, conditional on me having a 50 % chance of killing all life, I would be super powerful, and therefore have way better options available to increase welfare (even if I thought life was negative).
I would recommend killing everyone if this were implied by ETHU. I am less confident that killing everyone is bad than I am that negative conscious experiences are bad and positive conscious experiences are good.
I said a 50–50 chance of getting caught for trying to make a pathogen that brings about human extinction, not a 50–50 chance of successfully killing all life (far, far harder).
Feasibility obviously depends on that kind of biotech being achievable for small non-expert groups, or at least small groups only some of whom are experts. But even if it is not feasible, and your position is in fact robust, the broader point remains that I don't really believe you would actually kill everyone in that situation.
Sorry. I have removed "as you suggest" from my past comment. In any case, I think a greater feasibility of killing all life means there are more options available to increase welfare, such that it is increasingly unlikely that killing all life is the best one.