Recently, I’ve been part of a small team that is working on the risks posed by technologies that allow humans to steer asteroids (opening the possibility of deliberately striking the Earth). We presented some of these results in a poster at EA Global SF 2019.
At the moment, we’re expanding this work into a paper. My current position is that this is an interesting and noteworthy technological risk that is (probably) strictly less dangerous/powerful than AI, but that working on it can still be useful. My reasons include: mitigating a risk that is largely orthogonal to AI is valuable in itself; succeeding at preemptive regulation of a technological risk would improve our ability to do the same for more difficult cases (e.g., AI); and this risk offers a concrete way to popularize the X-risk concept, in contrast to more abstract risks from technologies like AI or biotech (most people understand the prevailing theory of the extinction of the dinosaurs and can fairly easily imagine such a disaster happening again).
Thanks for this piece, I thought it was interesting!
A small error I noticed while reading through one of the references: the line “For example, France’s GDP per capita is around 60% of US GDP per capita.[7]” incorrectly summarizes the cited material. The figure should be 67% for the sentence to be correct; the 60% value refers to consumption per person, not GDP per capita. The relevant passage in the underlying material is: “As an example, suppose we wish to compare living standards in France and the United States. GDP per person is markedly lower in France: France had a per capita GDP in 2005 of just 67 percent of the U.S. value. Consumption per person in France was even lower — only 60 percent of the U.S., even adding government consumption to private consumption.”