[Question] Intellectual property law for AI and existential risk in general?

Does anyone know of work being done on what makes for good intellectual property law around technologies that can shape the world in catastrophic ways, or mitigate catastrophes? Tight intellectual property laws reduce the number of people who have to be coordinated with in order to control a technology, but they can also introduce bottlenecks in the development and use of technologies (like coordination technologies) that could be used to mitigate risks.

Are there people thinking about this?