Are there any promising directions for AGI x-risk reduction that you are aware of that aren’t being (significantly) explored?