For example, the fact that it took us more than ten years to seriously consider the option of “slowing down AI” seems perhaps a bit puzzling. One possible explanation is that some of us have had a bias towards doing intellectually interesting AI alignment research rather than low-status, boring work on regulation and advocacy.
I’d guess it’s also that advocacy and regulation seemed less useful at the margin in most worlds, given the AI timelines people expected even 3 years ago?
Definitely!