Current AI projects will readily accelerate present trends toward a far higher likelihood of existential catastrophe. The risks are multiplied by the many uncoordinated AI projects around the world and their varied experimental applications: particularly genetic engineering in less fastidiously scientific jurisdictions, but also the many social, political, and military applications of misaligned AI. AI safety work may be well-intentioned, but it is ultimately irrelevant, because these genies will not be put back into every AI 'safety bottle'. Optimistically, since we have survived our existential risks for quite some time, we may yet find a means to survive the Great Filter challenge posed by Fermi's Paradox.