AI safety, governance, and alignment research and field building.
GabeM
The case for conscious AI: Clearing the record [AI Consciousness & Public Perception]
How to reduce risks related to conscious AI: A user guide [Conscious AI & Public Perception]
Conscious AI: Will we know it when we see it? [Conscious AI & Public Perception]
Conscious AI & Public Perception: Four futures
Conscious AI concerns all of us. [Conscious AI & Public Perception]
Ah right, I was conflating GiveWell’s operating costs (I assume not too high?) and the funding it sends to other charities, treating both as “GiveWell continuing to work on global poverty.” You’re right that they’ll probably still work on it and not collapse without OP; it’s just that they might send much less to other charities.
GiveWell seems pretty dependent on OP funding, such that it might have to change its work with significantly less OP money.
An update on GiveWell’s funding projections — EA Forum (effectivealtruism.org)
Open Philanthropy’s 2023-2025 funding of $300 million total for GiveWell’s recommendations — EA Forum (effectivealtruism.org)
This is sublime, thank you!
Naming nitpick: Given the title, an expression of valuing transparency, and this being in the Bay, I originally thought this was about Zuckerberg’s Meta, not Meta-EA :)
Reposting an anonymous addition from someone who works in policy:
Your list of options mostly matches how I think about this. I would add:
Based on several anecdotal examples, the main paths I’m aware of for becoming a trusted technical advisor are “start with a relevant job, like a policy fellowship, a job doing technical research that informs policy, or a non-policy technical job, and gradually earn a reputation for being a helpful expert.” To earn that reputation, some things you can do are: becoming one of the people who knows the most about some niche but important area (anecdotally, “just” a few years of learning can be sufficient for someone to become a top expert in areas such as compute governance or high-skill immigration policy, since these are areas where no one has decades of experience, though there are also generalists who serve as trusted technical advisors); taking opportunities that come your way to advise policymakers (such opportunities can be common once you have your first policy job, or if you can draw on a strong network while doing primarily non-policy technical work); and generally being nice and respecting confidentiality. You don’t need to be a US citizen to do this in the US context.
In addition to GovAI, other orgs where people can do technical research for AI policy include:
RAND and Epoch AI
Academia (e.g. I think the AI policy paper “What does it take to catch a Chinchilla?” was written as part of the author’s PhD work)
AI labs
[Question] How should technical AI researchers best transition into AI governance and policy?
That makes sense, thanks for the explanation! I’m still a bit confused about why they chose different numbers of years for the scientist and the PhD, how those particular numbers arise, and why they’re so different (I’m assuming it’s 1 year of scientist funding vs. 5 years of PhD funding).
Pretty ambitious, thanks for attempting to quantify this!
Having only quickly skimmed this and not looked into your code (so this could be my fault), I find myself a bit confused about the baselines: funding a single research scientist (I’m assuming this means at a lab?) or Ph.D. student for even 5 years doesn’t seem clearly equivalent to 87 or 8 adjusted counterfactual years of research; I’d imagine it’s much less than that. Could you provide some intuition for how the baseline figures are calculated (maybe you are assuming second-order effects, like funded individuals getting interested in safety and doing more of it, or mentoring others under them)?
climate since this is the one major risk where we are doing a good job
Perhaps (at least in the United States) we haven’t been doing a very good job on the communication front for climate change: there are many social circles where climate change denial has been normalized, and the issue has become very politically polarized, with many politicians turning climate change from an empirical scientific question into a political “us vs. them” problem.
around the start of this year, the SERI SRF (not MATS) leadership was thinking seriously about launching a MATS-styled program for strategy/governance
I’m on the SERI (not MATS) organizing team. One person from SERI (henceforth meaning not MATS, as they’ve rather split) was thinking about this in collaboration with some of the MATS leadership. The idea is currently not alive, but afaict it didn’t strongly die (i.e., I don’t think people decided not to do it and cancelled things; rather, it failed to happen due to other priorities).
I think something like this is good to make happen though, and if others want to help make it happen, let me know and I’ll loop you in with the people who were discussing it.
Interesting results!
Does “Time 1” in the graphs mean “Time 1: Why Uncontrollable AI Looks More Likely Than Ever | Time” and “Time 2” mean “Time 2: The Only Way to Deal With the Threat From AI? Shut It Down | Time”? I was a bit confused.
Excited for this!
Nit: your logo seems to show the shrimp a bit curled up, which iirc is a sign that they’re dead and not a happy, freely living shrimp (though it’s good that they’re blue and not red).
Some discussion of this consideration in this thread: https://forum.effectivealtruism.org/posts/bBoKBFnBsPvoiHuaT/announcing-the-ea-merch-store?commentId=jaqayJuBonJ5K7rjp
Gotcha, I think I still disagree with you for most decision-relevant time periods (e.g. I think they’re likely better than chance at estimating AGI within 10 years vs. 20 years).
Agree that they shouldn’t be ignored. By “you shouldn’t defer to them,” I just meant that it’s useful to also form one’s own inside view models alongside prediction markets (perhaps comparing to them afterwards).
lasting catastrophe?
perma-cataclysm?
hypercatastrophe?