I think EAs focused on x-risks are typically pretty gung-ho about improving international cooperation and coordination, but it’s hard to know what would actually be effective for reducing x-risk, rather than just e.g. writing more papers about how cooperation is desirable. There are a few ideas I’m exploring in the AI governance area, but I’m not sure how valuable and tractable they’ll look upon further inspection. If you’re curious, some concrete ideas in the AI space are laid out here and here.
Great points. I wonder if building awareness of x-risk in the general public (i.e. outside EAs) could help increase tractability and make research papers on cooperation more likely to get put into practice.
I’m curious which ideas you’re exploring too. I saw your post on the topic from last year. Reading some of the research linked there has been super helpful!
Thanks for linking these resources too. Looking forward to reading them.