GiveWell seems pretty dependent on OP funding, such that it might have to change its work with significantly less OP money.
An update on GiveWell’s funding projections — EA Forum (effectivealtruism.org)
Open Philanthropy’s 2023-2025 funding of $300 million total for GiveWell’s recommendations — EA Forum (effectivealtruism.org)
GMM
The case for conscious AI: Clearing the record [Conscious AI & Public Perception]
How to reduce risks related to conscious AI: A user guide [Conscious AI & Public Perception]
Conscious AI: Will we know it when we see it? [Conscious AI & Public Perception]
Conscious AI & Public Perception: Four futures
Conscious AI concerns all of us. [Conscious AI & Public Perception]
This is sublime, thank you!
Reposting an anonymous addition from someone who works in policy:
Your list of options mostly matches how I think about this. I would add:
Based on several anecdotal examples, the main path I'm aware of for becoming a trusted technical advisor is to start with a relevant job (a policy fellowship, a job doing technical research that informs policy, or a non-policy technical job) and gradually earn a reputation for being a helpful expert. To earn that reputation, some things you can do are: become one of the people who knows the most about some niche but important area (anecdotally, "just" a few years of learning can be sufficient for someone to become a top expert in areas such as compute governance or high-skill immigration policy, since these are areas where no one has decades of experience, though there are also generalists who serve as trusted technical advisors); take opportunities that come your way to advise policymakers (such opportunities can be common once you have your first policy job, or if you can draw on a strong network while doing primarily non-policy technical work); and generally be nice and respect confidentiality. You don't need to be a US citizen to do this in the US context.
In addition to GovAI, other orgs where people can do technical research for AI policy include:
RAND and Epoch AI
Academia (e.g. I think the AI policy paper “What does it take to catch a Chinchilla?” was written as part of the author’s PhD work)
AI labs
That makes sense, thanks for the explanation! Yeah, I'm still a bit confused about why they chose different numbers of years for the scientist and the PhD, how those particular numbers arise, and why they're so different (I'm assuming it's 1 year of scientist funding versus 5 years of PhD funding).
Pretty ambitious, thanks for attempting to quantify this!
Having only quickly skimmed this and not looked into your code (so this could be my fault), I find myself a bit confused about the baselines: funding a single research scientist (I'm assuming this means at a lab?) or PhD student for even 5 years doesn't seem clearly equivalent to 87 or 8 adjusted counterfactual years of research; I'd imagine it's much less than that. Could you provide some intuition for how the baseline figures are calculated (maybe you are assuming second-order effects, like funded individuals getting interested in safety and doing more of it or mentoring others under them)?
around the start of this year, the SERI SRF (not MATS) leadership was thinking seriously about launching a MATS-styled program for strategy/governance
I’m on the SERI (not MATS) organizing team. One person from SERI (henceforth meaning not MATS, as they’ve rather split) was thinking about this in collaboration with some of the MATS leadership. The idea is currently not alive, but afaict it didn’t strongly die (i.e., I don’t think people decided not to do it and cancelled things; rather, it failed to happen due to other priorities).
I think something like this is good to make happen though, and if others want to help make it happen, let me know and I’ll loop you in with the people who were discussing it.
Interesting results!
Does “Time 1” in the graphs mean “Time 1: Why Uncontrollable AI Looks More Likely Than Ever | Time” and “Time 2” mean “Time 2: The Only Way to Deal With the Threat From AI? Shut It Down | Time”? I was a bit confused.
Excited for this!
Nit: your logo seems to show the shrimp a bit curled up, which iirc is a sign that they’re dead rather than a happy, freely living shrimp (though it’s good that they’re blue and not red).
Some discussion of this consideration in this thread: https://forum.effectivealtruism.org/posts/bBoKBFnBsPvoiHuaT/announcing-the-ea-merch-store?commentId=jaqayJuBonJ5K7rjp
Agree that they shouldn’t be ignored. By “you shouldn’t defer to them,” I just meant that it’s useful to also form one’s own inside view models alongside prediction markets (perhaps comparing to them afterwards).
aren’t more reliable than chance
Curious what you mean by this. One version of chance is “uniform prediction of AGI over future years” which obviously seems worse than Metaculus, but perhaps you meant a more specific baseline?
Personally, I think forecasts like these are rough averages of what informed individuals would think about these questions. Yes, you shouldn’t defer to them, but it’s also useful to recognize how that community’s predictions have changed over time.
Ha thanks Vael! Yeah, that seems hard to standardize but potentially quite useful to use levels like these for hiring, promotions, and such. Let me know how it goes if you try it!
Thanks! Forgot about cloud computing, added a couple of courses to the Additional Resources of Level 4: Deep Learning.
Oh lol, I didn’t realize that was a famous philosopher until now; someone commented from a Google account with that name! Removed Ludwig.
Sure!
lasting catastrophe?
perma-cataclysm?
hypercatastrophe?