Sorry, I don't find this really speaks to my question.
I do not think the difficulty of decreasing a risk is independent of the value at stake. It is harder to decrease a risk when a larger value is at stake. So, in my mind, decreasing the near-term risk of human extinction is astronomically easier than decreasing the risk of not achieving 10^50 lives of value, such that decreasing the former by e.g. 10^-10 leads to a relative increase in the latter much smaller than 10^-10.
I also think that you're making some strong assumptions about things essentially cancelling out.
Could you elaborate on why you think I am making a strong assumption, by pointing to what you would question in the following?
In light of the above, I expect what David Thorstad calls rapid diminution. I see the difference between the PDF after and before an intervention reducing the near-term risk of human extinction as quickly decaying to 0, thus making the increase in the expected value of the astronomically valuable worlds negligible. For instance:
If the difference between the PDF after and before the intervention decays exponentially with the value of the future v, the increase in the value density caused by the intervention will be proportional to v*e^(-v)[4].
The above rapidly goes to 0 as v increases. For a value of the future equal to my expected value of 1.40*10^52 human lives, the increase in value density will be multiplied by a factor of 1.40*10^52*e^(-1.40*10^52) = 10^(log10(1.40) + 52 - log10(e)*1.40*10^52) ≈ 10^(-6.08*10^51), i.e. it will be basically 0.
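The factor above is far too small to evaluate with ordinary floating point, but it can be checked in log space. A minimal sketch of the calculation (the value 1.40*10^52 is taken from the text; everything else is just the identity log10(v*e^(-v)) = log10(v) - v*log10(e)):

```python
from math import log10, e

# Expected value of the future, in human lives (figure from the text).
v = 1.40e52

# v * e^(-v) underflows a float, so compute its base-10 logarithm instead:
# log10(v * e^(-v)) = log10(v) - v * log10(e).
log10_factor = log10(v) - v * log10(e)

# The log10(v) term (~52.1) is negligible next to -v*log10(e) (~ -6.08e51),
# so the multiplying factor is ~10^(-6.08*10^51), i.e. basically 0.
print(log10_factor)
```

This is only a check of the arithmetic in the quoted passage, not an argument for the exponential-decay assumption itself.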
Do you think I am overestimating how fast the difference between the PDF after and before the intervention decays? As far as I can tell, the (posterior) counterfactual impact of interventions whose effects can be accurately measured, like ones in global health and development, decays to 0 as time goes by. I do not have a strong view on the particular shape of the difference, but exponential decay is quite typical in many contexts.
I do not think the difficulty of decreasing a risk is independent of the value at stake. It is harder to decrease a risk when a larger value is at stake.
This makes sense as a kind of general prior to come in with. Although note:
It's surely observational, not causal: there's no magic at play which means that, if you keep a scenario fixed except for changing the value at stake, this should impact the difficulty.
One of the plausible generating mechanisms is having a broad altruistic market which takes the best opportunities, leaving no free lunches; but for some of the cases we're discussing it's unclear the market could have made it efficient.
So, in my mind, decreasing the near-term risk of human extinction is astronomically easier than decreasing the risk of not achieving 10^50 lives of value, such that decreasing the former by e.g. 10^-10 leads to a relative increase in the latter much smaller than 10^-10.
Now it looks to me as though you're dogmatically sticking with the prior. Having come across the (kinda striking) observation which says "if there's a realistic chance of spreading to the stars, then premature human extinction would forgo astronomical value", it seems like you're saying "well, that would mean that the prior was wrong, so that observation can't be quite right", and then reasoning from your prior to try to draw conclusions about the causal relationships there.
Whereas I feel that the prior reasonably justifies more scepticism in cases where more lives are at stake (and indeed, I do put a bunch of probability on "averting near-term extinction doesn't save astronomical value for some reason or another", though the reasons tend to be ones where we never actually had a shot at an astronomically big future in the first place, and I think that that's sort of the appropriate target for scepticism), but doesn't give you anything strong enough to be confident about things.
(I certainly wouldn't be surprised if I'm somehow misunderstanding what you're doing; I'm just responding to the picture I'm getting from what you've written.)
Now it looks to me as though you're dogmatically sticking with the prior.
Are there any interventions whose estimates of (posterior) counterfactual impact do not decay to 0 in at most a few centuries? From my perspective, their absence establishes a strong prior against persistent long-term effects.
I do put a bunch of probability on "averting near-term extinction doesn't save astronomical value for some reason or another", though the reasons tend to be ones where we never actually had a shot at an astronomically big future in the first place, and I think that that's sort of the appropriate target for scepticism
In general our ability to measure long-term effects is kind of lousy. But if I wanted to look for interventions which don't have that decay pattern, it would be most natural to think of conservation work saving species from extinction. Once we've lost biodiversity, it's essentially gone (maybe taking millions of years to build up again naturally). Conservation work can stop that. And with rises in conservation work over time, it's quite plausible that saving species early won't just lead to them going extinct slightly later, but to their being preserved indefinitely.
I was not clear above, but I meant (posterior) counterfactual impact under expected total hedonistic utilitarianism. Even if a species is counterfactually preserved indefinitely due to actions now, which I think would be very hard, I do not see how it would permanently increase wellbeing. In addition, I meant to ask for actual empirical evidence as opposed to hypothetical examples (e.g. of one species being saved and making an immortal conservationist happy indefinitely).
I think this is something where our ability to measure is just pretty bad, and in particular our ability to empirically detect whether the type of things that plausibly have long lasting counterfactual impacts actually do is pretty terrible.
I respond to that by saying "ok, I guess empirics aren't super helpful for the big-picture question; let's try to build mechanistic understanding of things, grounded wherever possible in empirics, as well as priors about what types of distributions occur when various different generating mechanisms are at play", whereas it sounds like you're responding by saying something like "well, as a prior we'll just use the parts of the distribution we can actually measure, and assume that generalizes unless we get contradictory data"?
I respond to that by saying "ok, I guess empirics aren't super helpful for the big-picture question; let's try to build mechanistic understanding of things, grounded wherever possible in empirics, as well as priors about what types of distributions occur when various different generating mechanisms are at play", whereas it sounds like you're responding by saying something like "well, as a prior we'll just use the parts of the distribution we can actually measure, and assume that generalizes unless we get contradictory data"?
Yes, that would be my reply. Thanks for clarifying.
Yeah, so I basically think that that response feels "spiritually frequentist", and is more likely to lead you to large errors than the approach I outlined (which feels more "spiritually Bayesian"), especially in cases like this where we're trying to extrapolate significantly beyond the data we've been able to gather.