whereas I think the latter is astronomically lower.
Your argument doesn’t seem clearly laid out in the doc, but it sounds to me like your view is that there isn’t a “time of perils” and then sufficient technology for long run robustness.
I think you might find it useful to more clearly state your argument which seems very opaque in that linked document.
I disagree and think a time of perils seems quite likely given the potential for a singularity.
There is a bunch of discussion making this exact point in response to “Mistakes in the moral mathematics of existential risk” (which seems mostly mistaken to me via the mechanism of implicitly putting astronomically low probability on robust interstellar civilizations).
It is unclear to me whether existential risk is higher than 10^-10 per year.
Causes of X-risk which seem vastly higher than this include:
AI takeover supposing you grant that AI control is less valuable.
Autocratic control supposing you grant that autocratic control of the long run future is less valuable.
I think x-risk is mostly non-extinction, and almost all the action is in changing which entities have control over resources rather than reducing astronomical waste.
Perhaps you adopt a view in which you don’t care at all what happens with long run resources so long as any group hypothetically has the ability to utilize these resources? Otherwise, given the potential for lock-in, it seems like influencing who has control is vastly more important than you seem to be highlighting.
(My guess is that “no entity ends up being in a position where they could hypothetically utilize long run resources” is about 300x lower than other x-risk (perhaps 0.1% vs 30% all-cause x-risk), which is vastly higher than your estimate.)
I also put vastly higher probability than you on extinction due to incredibly powerful bioweapons or other future technology, but this isn’t most of my concern.
Your argument doesn’t seem clearly laid out in the doc, but it sounds to me like your view is that there isn’t a “time of perils” and then sufficient technology for long run robustness.
I am mainly sceptical of the possibility of making worlds with astronomical value significantly more likely, regardless of whether the long-term annual probability of value dropping a lot tends to 0 or not.
I think you might find it useful to more clearly state your argument which seems very opaque in that linked document.
I agree what I shared is not very clear, although I will probably post it roughly as is one of these days, and then eventually follow up.
I disagree and think a time of perils seems quite likely given the potential for a singularity.
It is unclear to me whether faster economic growth or technological progress implies a higher extinction risk. I would say extinction risk has generally been going down until now, except maybe from around 1939 (start of World War 2) to 1986 (when the number of nuclear warheads peaked), although the fraction of people living in democracies increased by 21.6 pp (= 0.156 + 0.183 - (0.0400 + 0.0833)) during this period.
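(Spelling out the arithmetic in that parenthetical, with $\Delta$ denoting the increase in percentage points:

$$\Delta = (0.156 + 0.183) - (0.0400 + 0.0833) = 0.339 - 0.1233 \approx 0.216 = 21.6 \text{ pp}.$$

What each of the four figures refers to is not stated above, so I am only restating the sum, not interpreting it.)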
There is a bunch of discussion making this exact point in response to “Mistakes in the moral mathematics of existential risk” (which seems mostly mistaken to me via the mechanism of implicitly putting astronomically low probability on robust interstellar civilizations).
I agree the probability of interstellar civilizations, and astronomically valuable futures more broadly, should not be astronomically low. For example, I guess it is fine to assume a 1% chance for each order of magnitude of future value between 1 and 10^100 human lives. This is not my best guess, but it is just to give you a sense that I think astronomically valuable futures are plausible. However, I guess it is very hard to increase the probability of the astronomically valuable worlds.
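(As a rough illustration of what that toy distribution implies, taking the upper end of each order of magnitude as representative, which is my simplification rather than anything precise:

$$\mathbb{E}[V] \approx \sum_{i=1}^{100} 0.01 \times 10^{i} = 0.01 \times \frac{10^{101} - 10}{9} \approx 1.1 \times 10^{98} \text{ lives},$$

so even with only a 1% chance per order of magnitude, the expected value is dominated by the top few orders of magnitude.)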
Causes of X-risk which seem vastly higher than this include:
AI takeover supposing you grant that AI control is less valuable.
Autocratic control supposing you grant that autocratic control of the long run future is less valuable.
I think x-risk is mostly non-extinction, and almost all the action is in changing which entities have control over resources rather than reducing astronomical waste.
I guess the probability of something like a global dictatorship by 2100 is many orders of magnitude higher than 10^-10, but I do not think it would be permanent. If it were, then I would guess the alternative would be worse.
Perhaps you adopt a view in which you don’t care at all what happens with long run resources so long as any group hypothetically has the ability to utilize these resources? Otherwise, given the potential for lock-in, it seems like influencing who has control is vastly more important than you seem to be highlighting.
(My guess is that “no entity ends up being in a position where they could hypothetically utilize long run resources” is about 300x lower than other x-risk (perhaps 0.1% vs 30% all-cause x-risk), which is vastly higher than your estimate.)
I strongly endorse expected total hedonistic utilitarianism. There are many concepts of existential risk, so I prefer to focus on probabilities of clearly defined situations. One could think about existential risk from risk R as the relative increase in the expected value of the future if risk R was totally mitigated, but this is super hard to estimate in a way that makes the results informative. I currently think it is better to assess interventions based on standard cost-effectiveness analyses.
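(One way to write that definition explicitly, with $V$ denoting the value of the future:

$$\text{x-risk}(R) = \frac{\mathbb{E}[V \mid R \text{ totally mitigated}] - \mathbb{E}[V]}{\mathbb{E}[V]}.$$

This is just the relative increase in expected value described in words above.)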
It is unclear to me whether faster economic growth or technological progress implies a higher extinction risk. I would say extinction risk has generally been going down until now
My view is that the majority of “bad things happen with the cosmic endowment” risk is downstream of AI takeover.
I generally don’t think looking at historical case studies will be super informative here.
I agree that doing the singularity faster doesn’t make things worse; I’m just noting that you’ll go through a bunch of technology in a small amount of wall-clock time.
Sure, but is the probability of it being permanent more like 0.05 or 10^-6? I would guess more like 0.05. (Given modern technology and particularly the possibility of AI and the singularity.)
It depends on the specific definition of global dictatorship and the number of years. However, the major problem is that I have very little to say about what will happen more than 100 years into the future, other than thinking that whatever is happening will continue to change, and is not determined by what we do now.
By “permanent”, I mean >10 billion years. By “global”, I mean “it ‘controls’ >80% of resources under Earth-originating civilization control”. (Where control evolves with the extent to which technology allows for control.)
Thanks for clarifying! Based on that, and Wikipedia’s definition of dictatorship as “an autocratic form of government which is characterized by a leader, or a group of leaders, who hold governmental powers with few to no limitations”, I would say more like 10^-6. However, I do not think this matters, because that far into the future I would no longer be confident saying which form of government is better or worse.
I am mainly sceptical of the possibility of making worlds with astronomical value significantly more likely, regardless of whether the longterm annual probability of value dropping a lot tends to 0 or not.
As in, your argument is that you are skeptical on priors? I think I’m confused about what the argument is here.
Separately, my view is that due to acausal trade, it’s very likely that changing from human control to AI control looks less like “making worlds with astronomical value more likely” and more like “shifting some resources across the entire continuous measure”. But this mostly adds up to the same thing as creating astronomical value.
As in, your argument is that you are skeptical on priors? I think I’m confused about what the argument is here.
Yes, mostly that. As far as I can tell, the (posterior) counterfactual impact of interventions whose effects can be accurately measured, like ones in global health and development, decays to 0 as time goes by, and can be modelled as increasing the value of the world for a few years or decades, far from astronomically.
Separately, my view is that due to acausal trade, it’s very likely that changing from human control to AI control looks less like “making worlds with astronomical value more likely” and more like “shifting some resources across the entire continuous measure”. But this mostly adds up to the same thing as creating astronomical value.
I personally do not think acausal trade considerations are action-relevant, but, if I were to think along those lines, I would assume there is way more stuff to be acausally influenced which is weakly correlated with what humans do than stuff that is strongly correlated. So the probability of influencing more stuff acausally should still decrease with value, and I guess the decrease in the probability density would be faster than the increase in value, such that value density decreases with value. In this case, the expected value from astronomical acausal trades would still be super low.
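(One way to make that last step precise, assuming, a bit more strongly than I said above, that the probability density of acausally influencing a total value $v$ has a power-law tail $p(v) \propto v^{-\alpha}$ with $\alpha > 2$: the contribution to the expected value from values above an astronomical threshold $V$ is

$$\int_{V}^{\infty} v\,p(v)\,dv \propto \frac{V^{2-\alpha}}{\alpha - 2},$$

which tends to 0 as $V$ grows, so astronomical acausal trades would contribute very little in expectation whenever the density falls off faster than $v^{-2}$.)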