Holden has been working on independent projects, e.g. related to RSPs; the AI teams at Open Phil no longer report to him and he doesn’t approve grants. We all still collaborate to some degree, but new hires shouldn’t e.g. expect to work closely with Holden.
We fund a lot of groups and individuals and they have a lot of different (and sometimes contradicting) policy opinions, so the short answer is “yes.” In general, I really did mean the “tentative” in my 12 tentative ideas for US AI policy, and the other caveats near the top are also genuine.
That said, we hold some policy intuitions more confidently than others, and if someone disagreed pretty thoroughly with our overall approach and they also weren’t very persuasive that their alternate approach would be better for x-risk reduction, then they might not be a good fit for the team.
Echoing Eli: I’ve run ~4 hiring rounds at Open Phil in the past, and in each case I think that if the top few applicants had disappeared, we probably just wouldn’t have made a hire, or would have made significantly fewer hires.
Indeed. There aren’t hard boundaries between the various OP teams that work on AI, and people whose reporting line is on one team often do projects for or with a different team, or in another team’s “jurisdiction.” We just try to communicate about it a lot, and our team leads aren’t very possessive about their territory — we just want to get the best stuff done!
The hiring is more incremental than it might seem. As explained above, Ajeya and I started growing our teams earlier via non-public rounds, and are now just continuing to hire. Claire and Andrew have been hiring regularly for their teams for years, and are also just continuing to hire. The GCRCP team only came into existence a couple months ago and so is hiring for that team for the first time. We simply chose to combine all these hiring efforts into one round because that makes things more efficient on the backend, especially given that many people might be a fit for one or more roles on multiple teams.
The technical folks leading our AI alignment grantmaking (Daniel Dewey and Catherine Olsson) left to do more “direct” work elsewhere a while back, and Ajeya only switched from a research focus (e.g. the Bio Anchors report) to an alignment grantmaking focus late last year. She did some private recruiting early this year, which resulted in Max Nadeau joining her team very recently, but she’d like to hire more. So the answer to “Why now?” on alignment grantmaking is “Ajeya started hiring soon after she switched into a grantmaking role. Before that, our initial alignment grantmakers left, and it’s been hard to find technical folks who want to focus on grantmaking rather than on more thoroughly technical work.”
Re: the governance team. I’ve led AI governance grantmaking at Open Phil since ~2019, but for a few years we felt very unclear about what our strategy should be, and our strategic priorities shifted rapidly, and it felt risky to hire new people into a role that might go away through no fault of their own as our strategy shifted. In retrospect, this was a mistake and I wish we’d started to grow the team at least as early as 2021. By 2022 I was finally forced into a situation of “Well, even if it’s risky to take people on, there is just an insane amount of stuff to do and I don’t have time for ~any of it, so I need to hire.” Then I ran a couple of non-public hiring rounds, which resulted in recent new hires Alex Lawsen, Trevor Levin, and Julian Hazell. But we still need to hire more; all of us are already overbooked and constantly turning down opportunities for lack of bandwidth.
Cool stuff. Do you only leverage prediction markets, or do you also leverage prediction polls (e.g. Metaculus)? My sense of the research so far is that they tend to be similarly accurate with similar numbers of predictors, with perhaps a slight edge for prediction polls.
Damn.
There are now also two superforecaster forecasts about this:
Another historical point I’d like to make is that the common narrative about EA’s recent “pivot to longtermism” seems mostly wrong to me, or at least more partial and gradual than it’s often presented to be. All four leading strands of EA — (1) neartermist human-focused work, mostly in the developing world, (2) animal welfare, (3) the long-term future, and (4) meta — have been major themes in the movement since its relatively early days, including at the very first “EA Summit” in 2013 (see here), and IIRC for at least a few years before then.
What’s his guess about how “% of humans enslaved (globally)” evolved over time? See e.g. my discussion here.
Looks great!
How many independent or semi-independent abolitionist movements were there around the world during the period of global abolition, vs. one big one that started with Quakers+Britain and then was spread around the world primarily by Europeans? (E.g. see footnote 82 here.)
Re: more neurons = more valenced consciousness, does the full report address the hidden-qualia possibility? (I didn’t notice it at a quick glance.) My sense is that people who argue for “more neurons = more valenced consciousness” are typically assuming hidden qualia, whereas your objections based on empirical studies presumably assume no hidden qualia.
I really appreciate this format and would love to see other inaccurate articles covered in this way (so long as the reviewer is intellectually honest, of course).
I suspect this is because there isn’t a globally credible and legible consensus body generating or validating these forecasts, akin to the IPCC for climate forecasts, which are made with even longer time horizons.
Cool, I might be spending a few weeks in Belgrade sometime next year! I’ll reach out if that ends up happening. (Writing from Dubrovnik now, and I met up with some rationalists/EAs in Zagreb ~1mo ago.)
I’ll also note that GCRs was the original name for this part of Open Phil, e.g. see this post from 2015 or this post from 2018.