People from those orgs were aware, but none were keen enough about the idea to go as far as attempting a pilot run (e.g. the 2-week retreat idea). I think general downside risk aversion was probably a factor. This was in the pre-ChatGPT days of a much narrower Overton window though, so maybe it's time for the idea to be revived? On the other hand, maybe it's much less needed now that there is government involvement, and national AI Safety Institutes attracting top talent.
Also, in general I'm personally much more sceptical of such a moonshot paying off, given shorter timelines and the possibility that x-safety from ASI may well be impossible. I think OP was 2022's best idea for AI Safety; 2024's is PauseAI.