In retrospect I should have made a clearer distinction between “things that the author thinks are good and which are mostly timeline-insensitive according to his model of how things work” and “things that all reasonable observers would agree are good ideas regardless of their timelines.” The stuff you mentioned mostly relates to currently existing AI systems and management of their risks, and while not consensus-y, it is mostly agreed on by people in the trenches of language model risks—for example, there is a lot of knowledge to share (and already being shared) about language model deployment best practices. And one needn’t invoke or take a position one way or the other on AGI to justify government intervention in managing risks of existing and near-term systems, given the potential stakes of failure (e.g. collapse of the epistemic commons via scaled misuse of increasingly powerful language/image generation; reckless deployment of such systems in critical applications). Of course one might worry that intervening on those things will divert resources from other things, but my view—which I can’t really justify concisely here, though I’m happy to discuss it in another venue—is that the synergies overwhelmingly outweigh the tradeoffs (e.g. being careful about current technologies, compared to not doing so, yields big culture/norm benefits at the organizational and industry level—which will directly increase the likelihood of good AGI outcomes if the same orgs/people are involved—even if the techniques themselves are very different).
Yeah, I’m specifically interested in AGI / ASI / “AI that could cause us to completely lose control of the future in the next decade or less”, and I’m more broadly interested in existential risk / things that could secure or burn the cosmic endowment. If I could request one thing, it would be clarity about when you’re discussing “acutely x-risky AI” (or something to that effect) versus other AI things; I care much more about that than about you flagging personal views vs. consensus views.