I really liked this post and would be extremely happy to see more like it, especially where there is substantial disagreement.
To push back a little on the criticism of policy orgs that put out seemingly “vacuous” reports: when I once complained about the same thing on the side of some anti-safety advocates, someone quoted me this exact passage from Harry Potter and the Order of the Phoenix (the original, not the fanfic):
[Umbridge makes an abstract-sounding speech] ‘It explained a lot,’ [Hermione said]. ‘Did it?’ said Harry in surprise. ‘Sounded like a load of waffle to me.’ ‘There was some important stuff hidden in the waffle,’ said Hermione grimly. ‘Was there?’ said Ron blankly. ‘How about: “progress for progress’s sake must be discouraged”? How about: “pruning wherever we find practices that ought to be prohibited”?’ ‘Well, what does that mean?’ said Ron impatiently. ‘I’ll tell you what it means,’ said Hermione through gritted teeth. ‘It means the Ministry’s interfering at Hogwarts.’
I’ve discussed this with people in high-level institutions, and they pretty much confirmed my suspicion. Officials who play the “real game” have two channels of communication. There is the “face”, often a mandated role they did not choose, meant to represent interests that can (and sometimes do) conflict with their own. Then there is the “real person”, with their own political opinions and nationality.
The two are articulated through careful word selection and PR management. Except behind closed doors, actors will not give the “person’s” reasons for doing something, as that could lead to serious trouble (the media alone are enough of a threat). Instead, they generate rationalizations in line with the “face” that in some instances may suspiciously align with the “person’s” reasons, and in others may serve as dogwhistles. Their interlocutor, however, is usually aware that these are rationalizations, and will push back with rationalizations of their own. There is, to some extent, a real person-to-person exchange, and I expect orgs that are good at this game to appear vacuous from the outside.
There are exceptions to this strategy, of course (think Donald Trump, Mr Rogers, or, for a very French example, Élise Lucet). Yet even those exceptions are not naive: they take for granted that their counterpart is displaying some degree of hypocrisy.
It might be that most communication on x-risk really is happening; it’s just happening in Umbridgese. You may already have taken this factor into consideration, however.
I considered this argument when writing the post and had several responses, although they’re a bit spread out (sorry, I know it’s very long; I even deleted a bunch of arguments while editing to make it shorter). See the second half of this section, this section, and the commentary on some orgs (1, 2, 3).
In short:
If an org’s public outputs are low-quality (or even net harmful), and its defense is “I’m doing much better things in private where you can’t see them, trust me,” then why should I trust it? All the available evidence points the other way.
I generally oppose hiding your true beliefs on quasi-deontological grounds. (I’m a utilitarian but I think moral rules are useful.)
Openness appears to work, e.g. the FLI 6-month pause letter got widespread support.
It’s harder to achieve your goals if you’re obfuscating them.
Responding to only one minor point: the 6-month pause letter seems like exactly the type of thing you oppose. It does nothing to help with the risk; it is just deceptive PR that aligns with the goal of pushing against AI progress, while drawing support from people who disagree with that actual goal.