Cheers for the response; I'm still a bit puzzled as to how this reasoning would lead to the ratio being as extreme as 1 : a million/bajillion/quadrillion, which he mentions as something he puts some non-negligible credence on. (That confuses me, since even a small probability of the ratio being that extreme would surely dominate the expected-value calculation and make the future net-negative.)
It could be very extreme in case (2) if for some reason you think that the worst suffering is a million times worse than the best happiness (maybe you are imagining severe torture), but I agree that this seems implausibly extreme. Re how to weigh the different possibilities, it depends on whether you: 1) scale it as +1 vs -1M, 2) scale it as +1 vs -1/1M, or 3) give both models an equal vote in a moral parliament.
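To make it concrete, here's a rough numerical sketch of how the three options come apart. All the numbers (`p_a`, `p_b`, `happiness`, `suffering`, and the 10%/90% credences) are made up purely for illustration, not anything from the original discussion:

```python
# Hypothetical sketch: three ways to aggregate two moral models.
# "Model A" is the symmetric view (+1 vs -1); "Model B" holds the worst
# suffering is 1,000,000x worse than the best happiness.

p_a = 0.9  # assumed credence in the symmetric model
p_b = 0.1  # assumed credence in the suffering-focused model

happiness = 1_000.0  # stipulated best-happiness-equivalents in the future
suffering = 10.0     # stipulated worst-suffering-equivalents in the future

# Each model's own verdict on the future (the sign is scale-invariant):
ev_a = happiness * 1 + suffering * -1                    # = 990.0
ev_b_up = happiness * 1 + suffering * -1_000_000         # +1 vs -1M: -9,999,000.0
ev_b_down = happiness * (1 / 1_000_000) + suffering * -1 # +1 vs -1/1M: -9.999

# Option 1: take Model B's units at face value (+1 vs -1M), mix by credence.
option1 = p_a * ev_a + p_b * ev_b_up    # ~ -999,009: Model B dominates

# Option 2: normalize the other way (+1 vs -1/1M), fixing the suffering unit.
option2 = p_a * ev_a + p_b * ev_b_down  # ~ +890: Model B barely registers

# Option 3: moral parliament. Each model casts a vote on the sign of the
# future; "equal vote" as in the comment, plus a credence-weighted variant.
def sign(x):
    return (x > 0) - (x < 0)

vote_a, vote_b = sign(ev_a), sign(ev_b_up)         # +1 and -1
parliament_equal = vote_a + vote_b                 # 0: a tie with equal votes
parliament_weighted = p_a * vote_a + p_b * vote_b  # +0.8 under these credences

print(option1, option2, parliament_equal, parliament_weighted)
```

The point of the sketch is that option 1 flips the overall sign even at 10% credence in Model B, which is exactly the dominance worry in the parent comment, while option 2 makes Model B nearly irrelevant, and the parliament's verdict depends on how votes are allocated rather than on the 1M factor at all.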