AI safety, governance, and alignment research and field building.
Gabriel Mukobi
Levelling Up in AI Safety Research Engineering
The Tree of Life: Stanford AI Alignment Theory of Change
Is Civilization on the Brink of Collapse? - Kurzgesagt
[Question] How should technical AI researchers best transition into AI governance and policy?
aren’t more reliable than chance
Curious what you mean by this. One version of chance is “uniform prediction of AGI over future years” which obviously seems worse than Metaculus, but perhaps you meant a more specific baseline?
Personally, I think forecasts like these are rough averages of what informed individuals would think about these questions. Yes, you shouldn’t defer to them, but it’s also useful to recognize how that community’s predictions have changed over time.
I had no idea there were this many nematodes—is wild animal welfare just nematode welfare?! Do Rethink, WAI, or others have any research on them?
GiveWell seems pretty dependent on OP funding, such that it might have to change its work with significantly less OP money.
An update on GiveWell’s funding projections (EA Forum)
Open Philanthropy’s 2023–2025 funding of $300 million total for GiveWell’s recommendations (EA Forum)
Thanks for this post! I appreciate the transparency, and I’m sorry for all this suckiness.
Could one additional easyish structural change be making applications due even earlier for EAGx? I feel like the EA community has a bad tendency of leaving apps for things open until very soon before the actual event, and maybe an earlier due date gives people more time to figure out if they’re going and creates more buffer before catering number deadlines. Of course, this costs some extra organizer effort as you have to plan further ahead, but I expect that’s more of a shifting thing rather than a whole lot of extra work.
climate since this is the one major risk where we are doing a good job
Perhaps (at least in the United States) we haven’t been doing a very good job on the communication front for climate change. There are many social circles where climate change denial has been normalized, and the issue has become very politically polarized, with many politicians turning climate change from an empirical scientific problem into a political “us vs. them” problem.
Thanks for making this!
In the future, you may want to ask just one full name question for people who don’t fit neatly into the first name/last name split.
around the start of this year, the SERI SRF (not MATS) leadership was thinking seriously about launching a MATS-styled program for strategy/governance
I’m on the SERI (not MATS) organizing team. One person from SERI (henceforth meaning not MATS, as they’ve rather split) was thinking about this in collaboration with some of the MATS leadership. The idea is currently not alive, but afaict it didn’t strongly die (i.e. I don’t think people decided not to do it and cancelled things, but rather failed to make it happen due to other priorities).
I think something like this is good to make happen though, and if others want to help make it happen, let me know and I’ll loop you in with the people who were discussing it.
Thanks for building this, def seems like a way to save a lot of organizer time (and I appreciate how it differentiates things from a Bible group or a cult)!
To me, it seems like the main downside will be the lack of direct engagement between new people and established EAs. In a normal reading group, a participant meets and talks with a facilitator on day 1, and then every week thereafter, in between every 1–3 hours of EA-related reading. In this system, it seems like they don’t really get to meet and talk with someone until they go through a significant amount of independent exploration and write a reflection, and I wonder if the combination of that high required activation energy with little human-to-human guidance might cause you to lose some potentially good students as you go from the predicted 40 down to 20.
You could try offering “cheaper” 1:1s to these people early, but that seems less efficient than having several of them in a weekly reading group discussion which would defeat the point. That’s not to say I don’t think this is the right move for your situation. Just that I’m extra curious about how this factor might play out, and I’m excited for you to test this system and share the results with other groups!
I can’t unread this comment:
“Humanity could, theoretically, last for millions of centuries on Earth alone.” I find this claim utterly absurd. I’d be surprised if humanity outlasts this century.
Ughh they’re so close to getting it! Maybe this should give me hope?
Thanks for writing this, just filled out the form! I’m excited for a more coordinated community around AI safety field-building in universities!
They just added to it so it’s now “Is Civilization on the Brink of Collapse? And Could We Recover?” but it still seems to not answer the first question.
Good points! This reminds me of the recent Community Builders Spend Too Much Time Community Building post. Here are some thoughts about this issue:
Field-building and up-skilling don’t have to be orthogonal. I’m hopeful that a lot of an organizer’s time in such a group would involve doing the same things general members going through the system would be doing, like facilitating interesting reading group discussions or working on interesting AI alignment research projects. As the too much time post suggests, maybe just doing the cool learning stuff is a great way to show that we’re serious, get new people interested, and keep our group engaged.
Like Trevor Levin says in that reply, I think field-building is more valuable now than it will be as we get closer to AGI, and I think direct work will be more valuable later than it is now. Moreover, I think field-building while you’re a university student is significantly more valuable than field-building after you’ve graduated.
I don’t necessarily think the most advanced students will always need to be organizers under this model. I think there’s a growing body of EAs who want to help with AI alignment field-building but don’t necessarily think they’re the best fit for direct work (maybe they’re underconfident though), and this could be a great opportunity for them to help at little opportunity cost.
I’m really hopeful about several new orgs people are starting for field-wide infrastructure that could help offset a lot of the operational costs of this, including orgs that might be able to hire professional ops people to support a local group.
That’s not to say I recommend every student who’s really into AI safety delay their personal growth to work on starting a university group. Just that if you have help and think you could have a big impact, it might be worth considering letting off the solo up-skilling pedal to add in some more field-building.
Excited for this!
Nit: your logo seems to show the shrimp a bit curled up, which iirc is a sign that they’re dead rather than happy, freely living shrimp (though it’s good that they’re blue and not red).
Some discussion of this consideration in this thread: https://forum.effectivealtruism.org/posts/bBoKBFnBsPvoiHuaT/announcing-the-ea-merch-store?commentId=jaqayJuBonJ5K7rjp
Agree that they shouldn’t be ignored. By “you shouldn’t defer to them,” I just meant that it’s useful to also form one’s own inside view models alongside prediction markets (perhaps comparing to them afterwards).
I’ll also plug Microsoft Edge as a great tool for this: There’s both a desktop browser and a mobile app, and it has a fantastic built-in Read Aloud feature that works in both. You just click the Read Aloud icon or press Ctrl/Cmd+Shift+U on a keyboard and it will start reading your current web page or document out loud!
It has hundreds of neural voices (Microsoft calls them “Natural” voices) in dozens of languages and dialects, and you can change the reading speed too. I find the voices to be among the best I’ve heard, and the super low activation energy of not having to copy-paste anything or switch to another window means I use it much more often than when I tried apps like Neural Reader.
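As a sidenote for anyone who wants something similar programmatically: most browsers (including Edge) expose the standard SpeechSynthesis API, and I believe Edge also lists its “Natural” voices there. Here’s a minimal, untested sketch (the readAloud helper and the voice-name matching are my own assumptions, not how Edge’s Read Aloud actually works under the hood):

```typescript
// Minimal sketch using the standard Web Speech API (works in any modern browser).
// This is NOT Edge's Read Aloud implementation, just the closest scriptable analogue.
function readAloud(text: string, voiceHint = "Natural", rate = 1.25): void {
  const utterance = new SpeechSynthesisUtterance(text);
  // In Edge, getVoices() should also include entries like
  // "Microsoft Aria Online (Natural)"; the name filter here is a guess.
  // Note: getVoices() can return [] until the "voiceschanged" event fires.
  const voice = speechSynthesis.getVoices().find((v) => v.name.includes(voiceHint));
  if (voice) utterance.voice = voice;
  utterance.rate = rate; // reading speed, like Read Aloud's speed control
  speechSynthesis.speak(utterance);
}

// E.g., read the first 500 characters of the current page:
readAloud(document.body.innerText.slice(0, 500));
```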
Sidenote: as a browser, since it’s Chromium-based, it’s basically the same as Google Chrome (you can even install extensions from the Chrome Web Store) but with slightly less bloat and better performance.
I agree. I think it’s also worth pointing out that P(smart person | goes to school X) is a different metric from Count(smart person | goes to school X), the total number of people matching some criteria. One takeaway I got from the post is that while the probabilities might still differ between schools (“75th percentile SAT score falls very gradually”), the number of “smart” students might be comparable at different schools because many non-elite schools tend to be larger than private elite schools (further, since there are also many more non-elite schools, I might expect Count(smart person | goes to non-elite school) > Count(smart person | goes to elite school), but that’s beside the point).
Practically, for EA outreach, this maybe implies that university outreach might be harder at big non-elite-but-still-good schools: as Joseph points out below, the median student might be less qualified, so you’d have to sample more to “sift” through all the students and find the top students. But EA outreach isn’t random sampling—it appeals to certain aptitudes, can use nerd-sniping/self-selection effects, and can probably be further improved to select for the most smart/agentic/thoughtful/truth-seeking/whatever you care about students, and there might be a comparable number of those at different universities regardless of elite-ness. If the heavy tail of impact is what matters, this makes me update towards believing EA outreach (and students starting EA groups) at non-elite-but-still-good universities could be as good as at elite universities.
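To make the Count vs. P point concrete, here’s a toy sketch where all the enrollment numbers and “smart” rates are completely made up, just to illustrate the arithmetic:

```typescript
// Toy illustration of P(smart | school) vs. Count(smart | school).
// All figures below are hypothetical, for illustration only.
type School = { name: string; enrollment: number; smartRate: number };

const schools: School[] = [
  { name: "small elite private", enrollment: 7_000, smartRate: 0.5 },
  { name: "large non-elite public", enrollment: 45_000, smartRate: 0.1 },
];

for (const s of schools) {
  const count = s.enrollment * s.smartRate; // Count(smart | school)
  console.log(`${s.name}: P = ${s.smartRate}, Count = ${count}`);
}
// small elite private:    P = 0.5, Count = 3500
// large non-elite public: P = 0.1, Count = 4500
// The lower-probability school has the larger absolute count here, before even
// multiplying by the (much larger) number of non-elite schools.
```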