Mostly no, I think: an AI not capable of taking over without descriptions of AI takeover isn’t much more threatening after it reads a few descriptions of AI takeover. (In fact, most such descriptions are not optimized for realism, so I think it’s not even clear that AI reading such descriptions is net negative! The marginal description from this community is probably reasonably realistic, but lacks details on the hard steps, because we don’t know how to do them.) That said, if you think of an excellent tactic to help an AI take over the world, publishing it could cause a rogue AI to learn about it.
I endorse all of this, but would add that even if the AI in the last sentence did “learn about it,” that still wouldn’t actually matter — hence the answer isn’t “mostly no” but simply “no”.