Fortunately, the existential risks posed by AI are recognized by many close to President-elect Donald Trump. His daughter Ivanka seems to see the urgency of the problem. Elon Musk, a critical Trump backer, has been outspoken about the civilizational risks for many years, and recently supported California's legislative push to safety-test AI. Even the right-wing Tucker Carlson provided common-sense commentary when he said: "So I don't know why we're sitting back and allowing this to happen, if we really believe it will extinguish the human race or enslave the human race. Like, how can that be good?" For his part, Trump has expressed concern about the risks posed by AI, too.
This is a strange contrast with the rest of the article, considering that both Donald and Ivanka Trump's positions are largely informed by the "situational awareness" view, which argues that the US should develop AGI before China to ensure US victory over China. That is explicitly the position Tegmark and Leahy argue against (and consider existentially harmful) when they call to stop work on AGI and instead work on international cooperation to restrict it and develop tool AI.
I still see this kind of confusion between the two positions fairly often, and it is extremely strange. It's as if, back in the original Cold War, people couldn't tell the difference between anti-communist hawks and the Bulletin of the Atomic Scientists (let alone anti-war hippies) because technically both considered the nuclear arms race to be very important for the future of humanity.
I'm aware, and I don't disagree. However, in x-risk, many (not all) of those who are most worried are also most bullish about capabilities. Conversely, many (not all) of those who are not worried are unimpressed with capabilities. Being aware of the concept of AGI, that it may be coming soon, and of how impactful it could be is, in practice, often a first step towards becoming concerned about the risks, too. This is not true for everyone, unfortunately. Still, I would say that, at least for our chances of getting an international treaty passed, it is perhaps hopeful that the power of AGI is on the radar of leading politicians (although this may also increase risk through other paths).
I don't think that's true at all. The effective accelerationists and the (to coin a term) AI hawks are major factions in the conflict over AI. You could argue they aren't bullish enough about the full extent of AGI's capabilities (and except for the minority of extinctionist Landians, this is partly true), but in that case the Trumps aren't bullish enough either. As @Garrison noted here, prominent Republicans like Ted Cruz and JD Vance himself are already explicitly hostile to AI safety.