GCR capacity-building grantmaking and projects at Open Phil.
Eli Rose
Dispelling the Anthropic Shadow
I thought this was interesting & forceful, and am very happy to see it in public writing.
Funding for programs and events on global catastrophic risk, effective altruism, and other topics
Funding for work that builds capacity to address risks from transformative AI
The full letter is available here; it was recently posted online as part of this tweet thread.
Zach Robinson will be CEA’s next CEO
(meta musing) The conjunction of the negations of a bunch of statements seems a bit doomed to get a lot of disagreement karma, sadly. Esp. if the statements being negated are “common beliefs” of people like the ones on this forum.
I agreed with some of these and disagreed with others, so I felt unable to agreevote. But I strongly appreciated the post overall so I strong-upvoted.
Similar to those for our other roles, plus experience running a university group as an obvious one. I also think that extroversion and proactive communication are somewhat more important for these roles than for others.
Going to punt on this one as I’m not quite sure what is meant by “systems.”
This is too big to summarize here, unfortunately.
Check out “what kinds of qualities are you looking for in a hire” here. My sense is we index less on previous experience than many other organizations do (though it’s still important). Experience juggling many tasks, prioritizing, and syncing up with stakeholders jumps to mind. I have a hypothesis that consulting experience would be helpful for this role, but that’s a bit conjectural.
This is a bit TBD — happy to chat more further down the pipeline with any interested candidates.
We look for this in work tests and in previous experience.
The CB team continuously evaluates the track record of grants we’ve made when they’re up for renewal, and this feeds into our sense of how good programs are overall. We also spend a lot of time keeping up with what’s happening in CB and in x-risk generally, and this feeds into our picture of how well CB projects are working.
Check out “what kinds of qualities are you looking for in a hire” here.
Same answer as 2.
Empirically, in hiring rounds I’ve previously been involved in for my team at Open Phil, it has often seemed that if the top 1-3 candidates had just vanished, we wouldn’t have made a hire. I’ve also observed hiring rounds that concluded with zero hires. So, basically, I dispute the premise that the top applicants will be similar in terms of quality (as judged by OP).
I’m sympathetic to the take “that seems pretty weird.” It might be that Open Phil is making a mistake here, e.g. by having too high a bar. My unconfident best guess is that our bar has been somewhat too high in the past, though here I’m speaking just for myself. I think when you have a lot of strategic uncertainty, as GCR teams often do, that pushes towards a higher hiring bar, since you need people with a wide variety of skills.
I’d probably also gently push back against the notion that our hiring pool is extremely deep, though that’s obviously relative. I think e.g. our TAIS roles will likely get many fewer applicants than similar safety research roles at labs, for a mix of reasons, including salience to relevant people and the fact that OP isn’t competitive with labs on salary.
(As of right now, TAIS has only gotten 53 applicants across all its roles since the ad went up, vs. governance which has gotten ~2x as many — though a lot of people tend to apply right around the deadline.)
AMA: Six Open Philanthropy staffers discuss OP’s new GCR hiring round
Thanks for the reply.
I think “don’t work on climate change[1] if it would trade off against helping one currently identifiable person with a strong need” is a really bizarre/undesirable conclusion for a moral theory to come to, since if widely adopted it seems like this would lead to no one being left to work on climate change. The prospective climate change scientists would instead earn-to-give for AMF.
- ^
Or bettering relations between countries to prevent war, or preventing the rise of a totalitarian regime, etc.
Moreover, it’s common to assume that efforts to reduce the risk of extinction might reduce it by one basis point—i.e., 1/10,000. So, multiplying through, we are talking about quite low probabilities. Of course, the probability that any particular poor child will die due to malaria may be very low as well, but the probability of making a difference is quite high. So, on a per-individual basis, which is what matters given contractualism, donating to AMF-like interventions looks good.
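To spell out the “multiplying through” here (a rough sketch with round numbers I’m supplying, not the author’s): if an intervention cuts extinction risk by one basis point, then for any given presently existing person,

$$
P(\text{intervention saves this person}) \approx \underbrace{10^{-4}}_{\text{risk reduction}} \times \underbrace{1}_{P(\text{they die given extinction})} = 10^{-4},
$$

whereas an AMF-style donation gives one identifiable child a much higher chance of benefiting. Since contractualism compares individual claims rather than summing them, this per-person number is what the argument turns on.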
It seems like a society where everyone took contractualism to heart might have a hard time coordinating on any large moral issues where the difference any one individual makes is small, including non-x-risk ones like climate change or preventing great power war. What does the contractualist position recommend on these issues?
(In climate change, it’s plausibly the case that “every little bit helps,” while in preventing war between great powers outcomes seem much more discontinuous — not sure if this matters.)
So, it may be true that some x-risk-oriented interventions can help us all avoid a premature death due to a global catastrophe; maybe they can help ensure that many future people come into existence. But how strong is any individual’s claim to your help to avoid an x-risk or to come into existence? Even if future people matter as much as present people (i.e., even if we assume that totalism is true), the answer is: Not strong at all, as you should discount it by the expected size of the benefit and you don’t aggregate benefits across persons. Since any given future person only has an infinitesimally small chance of coming into existence, they have an infinitesimally weak claim to aid.
There’s a Parfit thought experiment:
I go camping and leave a bunch of broken glass bottles in the woods. I realize that someone may step on this glass and hurt themselves, so perhaps I should bury it. I do not bury it. As it turns out, 20 years pass before anyone is hurt: a young child steps on the glass and cuts their foot badly.
It seems like the contractualist principle above would say that there’s no moral value to burying the glass, because for any given individual, the probability that they’ll be the one to step on it is very low[1]. Is that right? (See the rough numbers sketched below.)
- ^
I think you can sidestep issues with population ethics here by just restricting this to people already alive today (so replace “young child” in the Parfit example with “adult” I guess). Though maybe the pop ethics issues are the crux?
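To put rough numbers on the glass case (illustrative figures I’m supplying, not Parfit’s): suppose that over the 20 years there are 10,000 people who might equally plausibly be the one to step on the glass. Then someone is injured with near-certainty, but for each individual

$$
P(\text{this particular person is injured}) = \frac{1}{10{,}000} = 10^{-4},
$$

so on a non-aggregative, per-person view, every claim to have the glass buried gets discounted to almost nothing, even though burying it prevents one serious injury essentially for sure.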
Nick Beckstead is leaving the Effective Ventures boards
(I’m a trustee on the EV US board.)
Thanks for checking in. As Linch pointed out, we added Lincoln Quirk to the EV UK board in July (though he didn’t come through the open call). We also have several other candidates at various points in the recruitment pipeline, but we’ve put this a bit on the back burner, both because we wanted to resolve some strategic questions before adding people to the board and because we’ve had less capacity than we expected.
Having said that, we were grateful for all the applications and nominations we received in response to that initial post, and we’re still intending to add board members in the coming months.
Douglas Hofstadter concerned about AI x-risk
Yep, it’s still active.
What is the base rate for Chinese citizens saying on polls that the Chinese government should regulate X, for any X?