“Indistinguishable from magic” is an Arthur C. Clarke quote about “any sufficiently advanced technology”, and I think you’re underestimating the complexity of building a generation ship and keeping it operational for hundreds, possibly thousands of years in deep space. Propulsion is pretty low on the list of problems if you’re skipping FTL travel, though you’re not likely to cross the galaxy with a solar sail or a 237 mN thruster using xenon as propellant. (FWIW I actually work in the space industry and spent the last week speaking with people about projects to extract oxygen from lunar regolith and assemble megastructures in microgravity, so it’s not like I’m just dismissing the entire problem space here)
I think that the distinction between killing all and killing most people is substantially less important than those people (and you?) believe.
I’m actually in agreement with that point, but more because I put more weight on the first 8 billion people than on the hypothetical orders of magnitude more future humans. (I think in a lot of catastrophe scenarios technological knowledge and ambition rebounds just fine eventually, possibly stronger)
This is an absurd claim.
Why is it absurd? If humans can solve the problem of sending a generation ship to Alpha Centauri, an intelligence smart (and malevolent) enough to destroy 8 billion humans in their natural environment surely isn’t going to be stymied by the complexities involved in sending some weapons after them or transmitting a copy of itself to their computers...