I see myself as straddling the line between the two communities. More rigorous arguments at the end, but first, my offhand impressions of what the median EA/XR person believes:
Ignoring XR, economic/technological progress is an immense moral good
Considering XR, economic progress is somewhat good, neutral at worst
The solution to AI risk is not “put everything on hold until we make epistemic progress”
The solution to AI risk is to develop safe AI
In the meantime, we should be cautious of specific kinds of development, but it’s fine if someone wants to go and improve crop yields or whatever
As Bostrom wrote in 2003: “In light of the above discussion, it may seem as if a utilitarian ought to focus her efforts on accelerating technological development. The payoff from even a very slight success in this endeavor is so enormous that it dwarfs that of almost any other activity. We appear to have a utilitarian argument for the greatest possible urgency of technological development.”
“However, the true lesson is a different one. If what we are concerned with is (something like) maximizing the expected number of worthwhile lives that we will create, then in addition to the opportunity cost of delayed colonization, we have to take into account the risk of failure to colonize at all. We might fall victim to an existential risk, one where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.[8] Because the lifespan of galaxies is measured in billions of years, whereas the time-scale of any delays that we could realistically affect would rather be measured in years or decades, the consideration of risk trumps the consideration of opportunity cost. For example, a single percentage point of reduction of existential risks would be worth (from a utilitarian expected utility point-of-view) a delay of over 10 million years.”
https://www.nickbostrom.com/astronomical/waste.html
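To make that figure concrete, here is a minimal back-of-envelope sketch of the arithmetic as I read it. The one-billion-year usable lifespan is an illustrative assumption on my part, not a number taken from the paper:

```python
# Back-of-envelope reconstruction of Bostrom's tradeoff (illustrative numbers).
# Assumption: the total achievable future value V is realized only if we avoid
# existential catastrophe, and delaying colonization by T years forfeits roughly
# the fraction T / L of it, where L is the usable lifespan of cosmic resources
# ("measured in billions of years" in the quote).

L_YEARS = 1e9  # assumed usable lifespan of cosmic resources, in years

def break_even_delay_years(risk_reduction, lifespan_years=L_YEARS):
    """Delay whose opportunity cost equals the expected gain from reducing
    existential risk by `risk_reduction` (an absolute probability)."""
    # gain from risk reduction: risk_reduction * V
    # cost of a T-year delay:   (T / lifespan_years) * V
    # setting them equal, V cancels out:
    return risk_reduction * lifespan_years

print(break_even_delay_years(0.01))  # 1 percentage point -> 10,000,000 years
```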
With regards to poverty reduction, you might also like this post in favor of growth: http://reflectivedisequilibrium.blogspot.com/2018/10/flow-through-effects-of-innovation.html
Thanks ADS. I’m pretty close to agreeing with all those bullet points actually?
I wonder if, to really get to the crux, we need to outline the specific steps, actions, programs, investments, etc. that EA/XR and PS would disagree on. “Develop safe AI” seems totally consistent with PS, as does “be cautious of specific types of development”, although both of those formulations are vague/general.
Re Bostrom:
“a single percentage point of reduction of existential risks would be worth (from a utilitarian expected utility point-of-view) a delay of over 10 million years.”
By the same logic, would a 0.001% reduction in XR be worth a delay of 10,000 years? Because that seems like the kind of Pascal’s Mugging I was talking about.
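For concreteness: taking Bostrom's figure at face value and scaling it linearly (and reading “0.001%” as an absolute 0.001-percentage-point reduction), the arithmetic does come out to 10,000 years:

```python
# Linear scaling of the quoted tradeoff: 1 percentage point <-> ~10 million years.
delay_per_percentage_point = 10_000_000  # years, per the quote
reduction_in_percentage_points = 0.001   # reading "0.001%" as 0.001 pp absolute
print(delay_per_percentage_point * reduction_in_percentage_points)  # 10,000 years
```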
(Also for what it’s worth, I think I’m more sympathetic to the “person-affecting utilitarian” view that Bostrom outlines in the last section of that paper—which may be why I lean more towards speed on the speed/safety tradeoff, and why my view might change if we already had immortality. I wonder if this is the crux?)
Side note: Bostrom does not hold or argue for putting 100% weight on total utilitarianism, such that one should accept overwhelming losses on other moral views for tiny gains by total utilitarian lights. In Superintelligence he specifically rejects an example of an extreme tradeoff of that magnitude (refusing to reserve even one galaxy’s worth of resources, out of millions, for humanity/existing beings, even if posthumans would derive more wellbeing from a given unit of resources).
I also wouldn’t actually accept a 10 million year delay in tech progress (and the death of all existing beings who would otherwise have enjoyed extended lives from advanced tech, etc.) for a 0.001% reduction in existential risk.
Good to hear!
In the abstract, yes, I would trade 10,000 years for a 0.001% reduction in XR.
In practice, I think the problem with this kind of Pascal’s Mugging argument is that it’s really hard to know what a 0.001% reduction actually looks like, and really easy to conjure one up with some fuzzy Fermi-estimate math. If someone were to say “please give me one billion dollars, I have this really good idea to prevent XR by pursuing Strategy X”, they could probably convince me that they have at least a 0.001% chance of succeeding. So my objections to really small probabilities are mostly practical.
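To spell out why the objection is practical rather than mathematical: with astronomically large stakes, even a 0.001% claimed chance of success makes the naive expected-value calculation come out overwhelmingly in favor of the pitch, so everything hinges on whether that tiny probability is actually well-calibrated. A toy sketch, where every number is a hypothetical placeholder of mine:

```python
# Toy illustration: a fuzzy Fermi estimate plus astronomical stakes makes
# almost any pitch look worthwhile on paper. All numbers are made up.
value_of_future_usd = 1e30      # arbitrary placeholder for astronomical stakes
claimed_success_prob = 1e-5     # the pitch: "at least a 0.001% chance"
ask_usd = 1e9                   # "please give me one billion dollars"

expected_benefit_usd = claimed_success_prob * value_of_future_usd
print(expected_benefit_usd / ask_usd)  # ~1e16: naive benefit/cost ratio
```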