I considered this argument when writing the post and had several responses, though they're a bit spread out (sorry, I know it's very long; I even deleted a bunch of arguments while editing to make it shorter). See the second half of this section, this section, and the commentary on some orgs (1, 2, 3).
In short:
- If an org's public outputs are low-quality (or even net harmful), and its defense is "I'm doing much better things in private where you can't see them, trust me", then why should I trust it? All the available evidence points the other way.
- I generally oppose hiding one's true beliefs, on quasi-deontological grounds. (I'm a utilitarian, but I think moral rules are useful.)
- Openness appears to work; e.g., the FLI 6-month pause letter got widespread support.
- It's harder to achieve your goals if you're obfuscating them.
Responding to just one minor point you made: the 6-month pause letter seems like exactly the kind of thing you oppose. It doesn't actually help with the risk; it's deceptive PR that serves the goal of pushing against AI progress while drawing support from people who don't share that goal.