I’ve been thinking a bit about secret efforts in AI safety research.
My current question is: if such work is happening or does occur, what non-secret efforts might be needed alongside it? For example, if a secret effort develops safe AI, media showing positive outcomes from AI might be needed so that people aren’t overly scared.
AI policy work might be needed too, perhaps limiting certain types of AI (agentic systems).
And if no one is working on this, is there an organisation that would be interested in starting?