Eli Rose (GCR capacity-building grantmaking and projects at Open Phil)
I’ll stand by the title here. I think a bilingual person without specific training in translation can have good taste in determining whether a given translation is high-quality. These seem like distinct skills; e.g. in English I’m able to recognize a work badly translated from French even if I don’t speak French and couldn’t produce a better translation myself. And having good taste seems like the most important skill for someone who is vetting and contracting with professional translators.
Separately, I also think that many (but not all) bilingual people without specific training in translation can themselves do good translation work. The results of our pilot project moved me towards this view (from a prior position that put a decent amount of weight on it).
As a high-level note, I see the goal here as enabling people to engage with EA ideas where they couldn’t before. It’s important that quality be high enough that the ideas are transmitted with good fidelity. But I don’t think we need to adhere to an extremely high and rigorous standard of the type one might have when translating a literary work, e.g. I don’t think we need translations to read so fluently that one forgets the material was originally written in English. I think this work is urgent and important, and I think the opportunity costs of imposing that kind of standard would be significant.
Hi Zakariyau. This seems like it definitely meets the criterion of a language with >5m speakers — I don’t have the context, but I don’t think English being the official language would be a barrier of any kind.
Unfortunately I think this kind of experimental approach is a bad fit here; opportunity costs seem really high, there’s a small number of data points, and there’s a ton of noise from the many other factors along which language communities vary.
Fortunately I think we’ll have additional context that will help us assess the impacts of these grants beyond a black-box “did this input lead to this output” analysis.
Hi Nathan — I think that probably wouldn’t make sense in this case, as I think it’s important for the person leading a given translation project to understand EA and related ideas well, even if translators they hire do not.
Yep, this list isn’t intended to rule anything out. We’d certainly be interested in getting applications from people who want to get content translated into Hindi or other Indian languages.
Ah, that’s my bad — thanks, fixed.
Thanks, really appreciate the concrete suggestion! This seems like a good lead for anyone who wants to supervise Polish translation.
Open Phil is seeking bilingual people to help translate web content related to AI safety, global catastrophic risks, effective altruism and adjacent topics into non-English languages
Cool, looking forward to talking about these.
I think this Wikipedia claim is from Reagan’s autobiography. But according to The Dead Hand, written by a third-party historian, Reagan was already very concerned about nuclear war by this time, and had been at least since his campaign in 1980. It’s pretty interesting — apparently this concern led both to his interest in nuclear weapon abolition (which he mostly didn’t talk about) and to his unrealistic and harmful missile defense plans.
So according to this book, The Day After wasn’t actually any kind of turning point.
The answer is yes, I can think of some projects in this general area that sound good to me. I’d encourage you to email me or sign up to talk to me about your ideas and we can go from there. As is always the case, a lot rides on further specifics about the project — i.e. just the bare fact that something is focused on mid-career professionals in tech doesn’t give me a lot of info about whether it’s something we’d want to fund or not.
[AMA] Open Philanthropy is still seeking proposals for outreach and community-building projects
(I work at Open Phil on community-building grantmaking.)
This role seems quite high-impact to me and I’d encourage anyone on the fence to apply. Our 2020 survey leads me to believe that 80k has been very impactful in terms of contributing to the trajectories of people who are now doing important longtermist work. I think good marketing work could significantly increase the number of people that 80k reaches, and the impact of doing this quickly and well seems competitive with a lot of other community-building work to me — one reason for this is that I think one digital marketer can effectively deploy a lot of funding.
Is there an equally high level of expert consensus on the existential risks posed by AI?
There isn’t. I think a strange but true and important fact about the problem is that it just isn’t a field of study in the same way e.g. climate science is — as argued in this Cold Takes post. So it’s unclear who the relevant “experts” should be. Technical AI researchers are maybe the best choice, but they’re still not a good one; they’re in the business of making progress locally, not forecasting what progress will be globally and what effects that will have.
Likewise.
Lol
I think this is a good question and there are a few answers to it.
One is that many of these jobs only look like they check the “improving the world” box if you have fairly unusual views. There aren’t many people in the world for whom e.g. “doing research to prevent future AI systems from killing us all” tracks as an altruistic activity. It’s interesting to look at this (somewhat old) estimate of how many EAs even exist.
Another is that many of the roles discussed here aren’t research-y roles (e.g. the biosecurity projects require entrepreneurship, not research).
Another is that the type of research involved (when the roles are in fact research roles) is often difficult, messy, and unrewarding. AI alignment, for instance, is a pre-paradigmatic field. The problem statement has no formal definition. The objects of study (broadly superhuman AI systems) don’t yet exist and therefore can’t be experimented upon. Out of all possible research that could be done in academia, “expected tractability” is a large factor in determining which questions people try to tackle. But when you’re filtering strongly for impact, as EA is, you can no longer select strongly for tractability. So it’s much more likely that things will be a confusing muddle on which it’s difficult to make clear progress.
What I’m talking about tends to be more of an informal thing which I’m using “EMH” as a handle for. I’m talking about a mindset where, when you think of something that could be an impactful project, your next thought is “but why hasn’t EA done this already?” I think this is pretty common and it’s reasonably well-adapted to the larger world, but not very well-adapted to EA.
EMH says that we shouldn’t expect great opportunities to make money to just be “lying around,” ready for anyone to take. EMH says that, if you have an amazing startup idea, you have to answer “why didn’t anyone do this before?” (Of course, this is a simplification; EMH isn’t really one coherent view.)
One might also think that there aren’t great EA projects just “lying around,” ready for anyone to do. This would be an “EMH for EA.” But I don’t think it’s true.
We’re interested in increasing the diversity of the longtermist community along many different axes. It’s hard to give a unified ‘strategy’ at this abstract level, but one thing we’ve been particularly excited about recently is outreach in non-Western and non-English-speaking countries.
Yes, you can apply for a grant under these circumstances. It’s possible that we’ll ask you to come back once more aspects of the plan are figured out, but we have no hard rules about that. And yes, it’s possible to apply for funding conditional on some event and later return the money/adjust the amount you want downwards if the event doesn’t happen.