A friend in AI Governance just shared this post with me.
I was blunt in my response, which I will share below:
~ ~ ~
Two cruxes for this post:
Is aligning AGI to be long-term safe even slightly possible – practically, given default scaled AI training and deployment trends and the complexity of the problem (see Yudkowsky’s list of AGI lethalities), or theoretically, given strict controllability limits (Yampolskiy) and uncontrollable substrate-needs convergence (Landry)?
If pre-aligning AGI not to cause a mass extinction is clearly not even slightly possible, then IMO splitting hairs about “access to good data that might help with alignment” is counterproductive.
Is a “richer technological world” worth the extent to which corporations are going to automate away our ability to make our own choices (starting with our own data), the increasing destabilisation of society, and the toxic environmental effects of automating technological growth?
These are essentially rhetorical questions, but they cover the points I would raise with someone who proposes desisting from collaboration with other groups who notice related harms and risks of corporations scaling AI.
To be honest, the reasoning in this post seems rather motivated, with no examination of its underlying premises.
These sentences in particular:
“A world that restricts compute will end up with different AGI than a world that restricts data. While some constraints are out of our control — such as the difficulty of finding certain algorithms — other constraints aren’t. Therefore, it’s critical that we craft these constraints carefully, to ensure the trajectory of AI development goes well.
Passing subpar regulations now — the type of regulations not explicitly designed to provide favorable differential technological progress — might lock us into bad regime.”
It assumes AGI is inevitable, and therefore we should be picky about how we constrain developments towards AGI.
It also implicitly assumes that continued corporate scaling of AI counts as positive “progress” – at least towards the kind of world they imagine would result, and would want to live in.
The tone also comes across as uncharitable, as if they are talking down to others they have not spent time trying to listen carefully to, take the perspective of, and paraphrase the reasoning back to (at least, nothing in the post is written about or from such attempts).
Frankly, we cannot let motivated techno-utopian arguments hold us back from taking collective action against exponentially increasing harms and risks (in both their scale and their local impacts). We need to work with other groups to gain traction.
~ ~ ~