AI safety work has important potential backfire risks, like accelerating capabilities (or causing others to do so, intentionally or not), worsening differential progress, and s-risks from backfires. I know less about biorisk, but there are infohazards there, so bringing more attention to biorisk can also increase the risk of infohazards leaking or of people deliberately searching for them.