The idea that developing asteroid deflection technology is good is so entrenched in popular opinion that arguing for less or no spending in the area seems likely to be counterproductive. This resembles the situation AI safety researchers find themselves in: advocating for less funding and development of AI seems relatively intractable, so they instead work on solutions to make AI safer. Pandemics research is another similar example — it has obvious benefits in building resilience to natural pandemics, but may also enable a malicious or accidental outbreak of an engineered pathogen.
I’m not sure about this. I don’t think I’ve ever heard about the idea that asteroid deflection technology would be good (or even about such technology at all) outside of EA. In contrast, potential benefits from AI are discussed widely, as are potential benefits from advanced medicine (and then to a lesser extent biotech advancements, and then maybe slightly pandemics research).
So I’m not sure if there is even widespread awareness of asteroid deflection technology, let alone entrenched views that it’d be good. This might mean pushing for differential progress in relation to this tech would be more tractable than that paragraph implies.
When I say that the idea is entrenched in popular opinion, I’m mostly referring to people in the space science/engineering fields — whether workers, researchers, or enthusiasts. This is anecdotal, based on my experience as a PhD candidate in space science. In the broader public, I think you’d be right that people would think about it much less; however, the researchers and the policy makers are the ones you’d need to convince for something like this, in my view.
Ah, that makes sense, then. And I’d also guess that researchers and policy makers are the main people that would need to be convinced.
But that might also be partly because the general public probably doesn’t think about this much or hold a very strong/solidified opinion; that might make it easier for researchers and policy makers to act in either direction without worrying about popular opinion, and mean this can be a case of pulling the rope sideways. So influencing the development of asteroid deflection technology might still be more tractable in that particular regard than influencing AI development, since there’s a smaller set of minds needing changing. (Though I’d still prioritise AI anyway, due to the seemingly much greater probability of extreme outcomes there.)
I should also caveat that I don’t know much at all about the asteroid deflection space.