Readers: Here’s a spreadsheet with the above Taxonomy, and some columns that I’m hoping we can collectively populate with useful pointers for each topic:
- Does [academic] work in this topic help with reducing GCRs/X-risks from AI?
- What’s the theory of change[1] for this topic?
- What skills does this build that are useful for AI existential safety?
- What are some Foundational Papers in this topic?
- What are some Survey Papers in this topic?
- Which academic labs are doing meaningful work on this topic?
- What are the best academic venues (workshops, conferences, journals) for this topic?
- What other projects are working on this topic?
- Any guidance on how to get involved, who to speak with, etc., for this topic?
For security reasons, I have not made it ‘editable’, but please comment on the sheet and I’ll come by in a few days and update the cells.
[1] Softly categorised as Plausible, Hope, or Grand Hope.
Thanks for doing this, Ben!