I think my position is super robust to future developments. Feel free to suggest bets. Given my empirical beliefs, I just do not see how the most cost-effective interventions can include "supervillain type stuff". For example, conditional on me having a 50 % chance of killing all life, I would be super powerful, and therefore have way better options available to increase welfare (even if I thought life was negative).
I would recommend killing everyone if this were implied by ETHU. I am less confident that killing everyone is bad than I am that negative conscious experiences are bad and positive conscious experiences are good.
I said a 50–50 chance of getting caught for trying to make a pathogen that brings about human extinction, not a 50–50 chance of successfully killing all life (far, far harder).
Feasibility obviously depends on that kind of biotech being achievable for small non-expert groups, or at least for small groups only some of whom are experts. But even if it is not feasible, and your position is in fact robust, I think the broader point remains: I don't really believe that you would actually kill everyone in that situation.
Sorry. I have removed "as you suggest" from my past comment. In any case, I think a greater feasibility of killing all life means there are more options available to increase welfare, such that it is increasingly unlikely that killing all life is the best one.