This is an interesting idea. One unstated assumption I don’t agree with is that Boltzmann brains would have more negative experiences than positive ones. To justify destroying the possibility of something existing, one would need to show that its negative experiences would outweigh its positive ones. Human brains experience pain more intensely than pleasure because they are adapted to an environment that harshly punishes mistakes. Boltzmann brains would form randomly, so their pleasure-to-pain balance would also be random, making the ratio roughly even. In that case, we have no idea what the net effect would be.
That’s a really good point. I’m inclined to think there’s an asymmetry that tips the balance toward suffering: maybe existing for only fractions of a second is distressing, or maybe pleasure requires a more fine-tuned structure than pain. But it’s hard to avoid anthropomorphizing randomly generated brains, so my intuitions may not be reliable. There’s also the whole negative-utilitarian vs. total-utilitarian debate.