Kabir Kumar: https://www.linkedin.com/in/kabir-kumar-324b02b8/
Josh Thorsteinson 🔸
Karma: 104
Lessons learned from starting an AI safety university group
Interesting, thanks for the feedback. That’s encouraging for AI safety groups—it’s easier to involve undergrads than grad students.
Thanks, Tzu!
The Short Timelines Strategy for AI Safety University Groups
They were referring to this quote from the linked post: “Median Estimate for when 99% of currently fully remote jobs will be automatable.”
“Automatable” doesn’t necessarily imply that the jobs will actually be automated.
It may be worth trying, but I don’t think a voluntary moratorium on AI development would work.
The Asilomar Conference was preceded by a seven-month voluntary pause on dangerous DNA splicing research, but I think this would be much harder to do for AI.
Some differences:
1. The molecular biology community in 1975 was small and mostly concentrated in the US, but the AI research community today is massive and globally dispersed.
2. Although the pause was effective in the West, it didn’t stop the USSR’s DNA splicing research in its secret biological weapons program.
3. The voluntary DNA splicing pause was only for a few months, and the scientists believed their research would resume after the Asilomar Conference. An effective AI pause would ideally be much longer than that, probably without a defined end date.
4. FLI already called for a six-month pause in 2023; it was ignored.
5. The incentives to develop transformative AI are far greater than the incentives for recombinant DNA research were.
I haven’t looked into other historical voluntary moratoriums in depth, but I don’t think the bottom line would be different.
Thanks for this post; I think it’s great! Just adding my perspective on this part, since I’ve researched this topic before.