1-4 is only unreasonable because you’ve written a strawman version of 4. Here is a version that makes total sense:
1. You make a superficially compelling argument for invading Iraq
2. A similar argument, if you squint, can be used to support invading Vietnam
3. That argument for invading Vietnam was wrong because it made mistakes X, Y, and Z
4. Your argument for invading Iraq also makes mistakes X, Y, and Z
5. Therefore, your argument is also wrong.
Steps 1-3 are not strictly necessary here, but they add supporting evidence to the claims.
As far as I can tell from the article, they are saying that you can make a counting argument which concludes that it's impossible to train a working model with SGD. They use this as a jumping-off point to explain the mistakes that lead to flawed counting arguments, and then they spend the rest of the article trying to show that the AI misalignment counting argument makes these same mistakes.
You can disagree with whether they have actually shown that the AI misalignment argument makes comparable mistakes, but that's a different problem from the one you claim is going on here.