I’m not sure if this is the right place to ask this, but does anyone know what point Paul’s trying to make in the following part of this podcast? (Relevant section starts around 1:44:00)
Suppose you have a P probability of the best thing you can do and a one-minus-P probability of the worst thing you can do, what does P have to be so that you’re indifferent between that and the barren universe? I think most of my probability is distributed between you would need somewhere between 50% and 99% chance of good things, and then put some probability or some credence on views where that number is a quadrillion times larger or something, in which case it’s definitely going to dominate. A quadrillion is probably too big a number, but very big numbers. Numbers easily large enough to swamp the actual probabilities involved.
[ . . . ]
I think that those arguments are a little bit complicated; how do you get at these? I think, to clarify the basic position, the reason that you end up concluding it’s worse is just, like, consult your intuition about how bad the worst thing that can happen to a person is vs the best thing, and, damn, the worst thing seems pretty bad. And then the, like, first-pass response is to sort of have this debunking understanding, where we understand causally how it is that we ended up with this kind of preference with respect to really bad stuff versus really good stuff.
If you look at what happens over evolutionary history: what is the range of things that can happen to an organism, and how should an organism be trading off, like, best possible versus worst possible outcomes? Then you end up in: well, to what extent is that a debunking explanation that explains why humans, in terms of their capacity to experience joy and suffering, are unbiased but the reality is still biased, versus to what extent is this then fundamentally reflected in our preferences about good and bad things? I think it’s just a really hard set of questions. I could easily imagine maybe shifting on them with much more deliberation.
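For what it’s worth, here’s how I read the arithmetic in the first quoted paragraph, writing k (my notation, not his) for how many times worse the worst outcome is than the best outcome is good, and B for the value of the best outcome. Indifference with the barren (zero-value) universe requires:

```latex
P \cdot B + (1 - P)\,(-k B) = 0 \quad\Longrightarrow\quad P = \frac{k}{k + 1}
```

So k = 1 gives P = 50%, k = 99 gives P = 99%, and k = 10^15 (a quadrillion) gives P ≈ 1 - 10^-15, which I take to be the sense in which very large values of k would swamp the actual probabilities involved.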
It seems like an important topic, but I’m a bit confused by what he’s saying here. Is the perspective he’s discussing (and puts non-negligible probability on) one that states that the worst possible suffering is a bajillion times worse than the best possible pleasure is good, and wouldn’t that suggest every human’s life is net-negative in expectation (even if your credence in this being the case is only ~0.1%)? Or is this just discussing the energy efficiency of ‘hedonium’ and ‘dolorium’, which could potentially be dealt with by some sort of limitation on compute?
Also, I’m not really sure whether this set of views is more like “a broken bone/waterboarding is a million times as morally pressing as making a happy person”, or more along the empirical lines of “most suffering (e.g. waterboarding) is extremely light; humans can experience far far far far far^99 times worse, and pleasure doesn’t scale to the same degree.” Even a tiny chance of the second one being true is awful to contemplate.
Specifically:
Then you end up in: well, to what extent is that a debunking explanation that explains why humans, in terms of their capacity to experience joy and suffering, are unbiased but the reality is still biased
I’m not really sure what’s meant by “the reality” here, nor what’s meant by “biased”. Is the assertion that humans’ intuitive preferences are driven by the range of possible things that could happen in the ancestral environment, and that this isn’t likely to match the maximum possible pleasure vs. suffering ratio in the future? If so, how does this lead one to end up concluding it’s worse (rather than better)? I’m not really sure how these arguments connect in a way that could lead one to conclude that the worst possible suffering is a quadrillion times as bad as the best bliss is good.
As I understand it, he gives two possibilities: (1) our capacity for happiness is symmetric, while our “reality” (i.e. humanity’s historical environment) has been asymmetric; (2) our preferences themselves are asymmetric, because we were “trained” to suffer more from adverse events, giving us a greater capacity for suffering. (1) gives more reason for optimism than (2), because we are more able to change the environment than our capability for happiness/suffering.
FWIW, I think we might be able to change our capability for happiness/suffering too, and so thinking along these lines, the question might ultimately hang on energy efficiency arguments anyway.
Cheers for the response; I’m still a bit puzzled as to how this reasoning would lead to the ratio being as extreme as 1:a million/bajillion/quadrillion, which he mentions as something he puts non-negligible credence on (which confuses me, as even a small probability of this being the case would surely dominate and make the future net-negative).
It could be very extreme in case (2) if for some reason you think that the worst suffering is a million times worse than the best happiness (maybe you are imagining severe torture), but I agree that this seems implausibly extreme. Re how to weigh the different possibilities, it depends on whether you: 1) scale it as +1 vs 1M, 2) scale it as +1 vs 1/1M, or 3) give both models equal vote in a moral parliament.
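To make the difference concrete, here is a rough sketch of how the normalization choice plays out (this is just my reading of options 1 and 2), using the ~0.1% credence and the 1:1,000,000 ratio mentioned above as purely illustrative numbers:

```latex
% Illustrative: credence q = 0.001 in the extreme (1:10^6) view, 1 - q = 0.999 in the symmetric (+1, -1) view.

% Option 1: fix the best outcome at +1 in both views, so the extreme view rates the worst at -10^6.
\mathbb{E}[\text{worst}] = 0.999\,(-1) + 0.001\,(-10^{6}) \approx -1001, \qquad \mathbb{E}[\text{best}] = +1

% Option 2: fix the worst outcome at -1 in both views, so the extreme view rates the best at +10^{-6}.
\mathbb{E}[\text{best}] = 0.999\,(+1) + 0.001\,(+10^{-6}) \approx +1, \qquad \mathbb{E}[\text{worst}] = -1
```

So under option 1 even a 0.1% credence makes the worst outcome dominate the calculation, while under option 2 it barely moves the numbers; option 3 sidesteps the summation entirely, since the views bargain in the parliament rather than being added on a common scale.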