I’m good at explaining alignment to people in person, including to policymakers.
I got 250k people to read HPMOR and sent 1.3k copies to winners of math and computer science competitions; I have taken the GWWC pledge; and I created a small startup that donated >$100k to effective nonprofits.
I have a background in ML and strong intuitions about the AI alignment problem. In the past, I studied a bit of international law (with a focus on human rights) and wrote appeals that won cases against the Russian government in Russian courts. I grew up running political campaigns.
I’m interested in chatting with potential collaborators and comms allies.
My website: https://contact.ms
Schedule a call with me: https://contact.ms/ea30
We’re translating a lot of EA content into Russian and have experience that might be relevant to countries where most people can’t speak English.
We learned that you need introspective people to produce or evaluate translations.
The best professional translators in our language are mostly hired by large publishing houses and have long-term commitments to many upcoming books (and the publishing houses aren’t able to lend them out for our projects).
Surprisingly, when shown a translation that reads like good text in our language but contains significant mistakes, most people don’t notice the mistakes. Most people who aren’t top professional translators don’t actively try to check what exactly they just read. When a translation expresses something quite different from what the original text conveys, but the words are similar enough, people simply don’t notice.
Google Translate made fewer mistakes than 80% of the translators who sent us a translated test segment. We ended up hiring two translators from the EA/LW community.
Many of 80,000 Hours’ articles are highly optimized for conveying a correct understanding, and translation errors can significantly reduce the value of these texts.