How should important ideas around topics like AI and biorisk be shared? Is there a best practice, or are there government departments that specialise in handling that?
I’ve been thinking a bit about secret efforts in AI safety research.
My current thoughts: if secret research is happening (or starts to), what non-secret efforts might be needed alongside it? For example, if it develops safe AI, media showing positive outcomes from AI might be needed so that people aren’t overly scared.
AI policy might be needed too, perhaps limiting certain types of AI (agentic systems, for instance).