I can see six strategies that may prove fruitful for fostering differential technological progress:
1) Increasing safety-savvy insight—e.g. helping with the AI control problem, and perfecting our understanding of moral psychology and cultural evolution.
2) Decreasing rates of progress in dangerous areas—e.g. decelerating the brain emulation project, passing legislation against AI progress, etc.
3) Diminishing the incentives which stimulate progress in undesirable areas—e.g. if aging is cured, chronologically old individuals would no longer have an incentive to accelerate the intelligence explosion.
4) Using a Scorched Earth strategy—e.g. if surpassing a threshold of average wealth would be risky, perhaps because some individuals would have enough power to start a world-wrecking cascade, then burning resources now would guarantee that the future will be safer by being poorer.
5) Using a differentiated Scorched Earth strategy—e.g. try to predict which assets/valuables will be in the hands of those individuals who can start world-destroying cascades in the future, and destroy non-liquid resources from that pool now. Whether non-liquid because rare, like enriched uranium, or non-liquid because hard to exchange, like private islands or old churches, the crucial consideration is whether destroying these resources now will counterfactually impede similar resources from being used at the time of their maximum destructive potential by these agents.
6) Finding out whether there are further strategies not considered among the previous five.
Which of these, if any, do you see as the lowest-hanging fruit for EAs, and why?
1) Seems like a good start, as it's likely to draw together people with a common concern, but it's very unlikely to stop things. Research capabilities are dispersed, and if you're saying that the stepping-on-each-other's-toes effect is going to outweigh the standing-on-the-shoulders-of-giants effect, then researchers in different parts of the world with different agendas will want to get going with it. It would need enough of the right people to subscribe to this for long enough to prevent the technology. Unlikely. But it might buy time.
Differentiated scorched earth isn't different in consequence from 2) - both are kinds of regulation, one official and centralised, the other with the possibility of being unofficial and dispersed. The drawback of the scorched earth strategy is that it's irreversible. The drawback of the regulatory strategy is that it's gameable.
Curing aging might be one of the threats to humanity as we know it? Mortality = vulnerability = dependence on others = good society? Also, strategies to reduce incentives aren't dependable if they rely on new tech fixes, as tech fixes are very hard to predict or encourage, and they may carry their own unknown risks even if we could pull them off (so there's a level at which you could be introducing more risk than risk control, and it's hard to figure out which is which).
I personally think a harm-control strategy / centralised regulatory + intelligence function is our best bet for differentiated progress. This also comes with the side benefit of forcing debate and norms in scientific research to align (or not) and of bringing the public in on it, and the public is usually in favour of regulating against scary risks even when the risks are small (unless they're framed as protecting us from other human beings).
The argument for differential progress has been made before by Bostrom a few times, and by Beckstead as well.
http://www.existential-risk.org/figure5.png