Here’s the introductory section of the doc, but feel free to not read this:
A bunch of people are separately working, or have worked, on collecting policy ideas that might be relevant to long-term/​extreme outcomes from AI. I’m not sure whether it would be good for all of these people to share their collections with each other (e.g., a given collection might be too sensitive, or it might be better to have more independent thinking first). But some such sharing would probably be good, and it seems at least useful for these people to be aware that they’re all working on this sort of thing. So I quickly made this doc to list the collections I’m aware of.
I’ve put these in alphabetical order. Please let me know if you’re aware of other collections. Also let me know if you have any other thoughts, e.g., on whether this doc should exist at all or whether a different approach should be taken.
Currently this doc is accessible only to the people who made the collections listed below, other Rethink Priorities longtermism staff, and a couple of other people. I expect to share it with a few more people soon. I also currently intend, at some later point, to share it fairly liberally within the AI governance community, and perhaps to e.g. copy its contents into an EA Forum shortform, but I’ll check with the people whose collections are mentioned before doing so. Please let me know whether or not you’re happy for your collection to be listed in this doc and for the doc to be shared more widely.