Excellent, I’m happy to see that! However, I’m concerned that the proposal focuses entirely on translating general EA concepts.
I think it is a much higher priority (from the perspective of reducing AI x-risk) to translate AI alignment concepts, particularly the AGI Safety Fundamentals course material. It takes many inferential steps to go from “I’m interested in doing good” to “I like EA ideas” to “I think AI alignment is important” to “I want to work on AI alignment; where can I start?” And even if many Mandarin speakers reach that last point through a Mandarin translation of 80,000 Hours, they will currently find very few (if any?) structured opportunities to skill up for AI alignment.
Additionally, I don’t think one needs to know about longtermism, QALYs, or PlayPumps to recognize the importance of AI alignment work. Nor does one need to care about doing as much good as possible with one’s career. One only needs to grasp why AI might be extremely dangerous and why advanced capabilities might be coming soon.
One more point: translating AI alignment resources may also carry lower risks than translating general EA content.