There’s been quite a bit written on the “pro” side:
https://www.cser.ac.uk/resources/bridging-concerns-about-ai/
https://www.cser.ac.uk/resources/bridging-gap-case-incompletely-theorized-agreement-ai-policy/
https://www.cser.ac.uk/resources/beyond-near-long-term/
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2976444
https://arxiv.org/abs/2012.08630
https://forum.effectivealtruism.org/posts/oR9tLNRSAep293rr5/why-those-who-care-about-catastrophic-and-existential-risk-2
Also ARCHES, Concrete Problems in AI Safety, etc.
But not so much on the “con” side; people have generally just thought about opportunity cost. Your point that it might speed up applications that are harmful (due to safety, misuse, or structural risks) is a really useful and important one! It would be hard to weigh things up, since this gets into tricky differential technological development territory. I'd love to see more thinking on this topic.