When I learned about x-risks in 2007, I decided to do something about it, and the only thing I could do at the moment was translate the main articles by Bostrom and EY into Russian. I spent half a year doing it, and it had two results: I improved my English, and I learned a lot about the topic, so I started to develop my own ideas about it, which resulted in a book in Russian.
So I think it is the right thing to do—go directly toward your goal.
Props for doing this. I was recently reflecting that it would be great to have a bunch of the LW Sequences or other works describing AI value-alignment problems translated into Chinese. If anyone who knows Chinese sees this and it seems like their kind of thing, I’d say go for it!