Yeah, I’m specifically interested in AGI / ASI / “AI that could cause us to completely lose control of the future in the next decade or less”, and I’m more broadly interested in existential risk / things that could secure or burn the cosmic endowment. If I could request one thing, it would be clarity about when you’re discussing “acutely x-risky AI” (or something to that effect) versus other AI things; I care much more about that than about you flagging personal views vs. consensus views.