Interesting, thanks.
Yeah, CSET isn’t an EA think tank, though a few EAs have worked there over the years.
Yes, this is part of the reason I personally haven’t prioritized funding European think tanks much, in addition to my grave lack of context on how policy and politics work in the most AI-relevant European countries.
Can you say more about why you recommend not pursuing formal certificates? Does that include even the “best” ones, e.g. from SANS? I’ve been recommending people go for them, because they (presumably) provide a guided way to learn lots of relevant skills, and are a useful proof of skill to prospective employers, even though of course the actual technical and analytic skills are ultimately what matter.
What EA orgs do you have in mind? I guess this would be policy development at places like GovAI and maybe Rethink Priorities? My guess is that the policy-focused funding for EAish orgs like that is dwarfed by the Open Phil funding for CSET and CHS alone, which IIRC is >$130M so far.
Yes, we (Open Phil) have funded, and in some cases continue to fund, many non-EA think tanks, including the six you named and also Brookings, National Academies, Niskanen, Peterson Institute, CGD, CSIS, CISAC, CBPP, RAND, CAP, Perry World House, Urban Institute, Economic Policy Institute, Roosevelt Institute, Dezernat Zukunft, Sightline Institute, and probably a few others I’m forgetting.
I don’t know why the original post claimed “it is pretty rare for EAs to fund non-EA think tanks to do things.”
I donated $5800.
When I ran two recruiting rounds for Open Philanthropy in ~2018, IIRC our policy was to offer feedback to those who requested it and made it past a certain late stage of the process, but not to everyone, because we had >1000 applicants and couldn’t afford the time to write custom feedback for anywhere near that many people. Not sure what our current practice is.
I’m not religious but (at a glance) I feel happy this book exists.
Nice overview! I’m broadly on board with this framing.
One quibble: I wish this post were clearer that the example actions, outputs, and institutions you list are not always themselves motivated by longtermist or x-risk considerations, even though people motivated by longtermism/x-risk tend to see those example outputs as more relevant to their concerns than many other reports and topics in the broader space of AI governance. E.g., w.r.t. “who’s doing it,” very few people at CSET or TFS are working on these issues from something like a longtermist lens, relatively more (but not a majority) are at DeepMind or OpenAI, and some orgs are majority or exclusively motivated by a longtermist/x-risk lens (e.g. FHI and the AI program team at Open Phil).
The authors will have a more-informed answer, but my understanding is that part of the answer is “some ‘disentanglement’ work needed to be done w.r.t. biosecurity for x-risk reduction (as opposed to biosecurity for lower-stakes scenarios).”
I mention this so that I can bemoan the fact that I think we don’t have a similar list of large-scale, clearly-net-positive projects for AI x-risk reduction, in part because (I think) the AI situation is more confusing and requires more and harder disentanglement work (some notes on this here and here). The Open Phil “worldview investigations” team (among others) is working on such disentanglement research for AI x-risk reduction, and I would like to see more people tackle this strategic-clarity bottleneck, ideally in close communication both with folks who have experience with relatively deep, thorough investigations of this type (a la Bio Anchors and other Open Phil worldview investigation reports) and with folks who will use greater strategic clarity to take large actions.
Hi Michael,
I don’t have much time to engage on this, but here are some quick replies:
I don’t know anything about your interactions with GiveWell. My comment about ignoring vs. not-ignoring arguments about happiness interventions was about me / Open Phil, since I looked into the literature in 2015 and have read various things by you since then. I wouldn’t say I ignored those posts and arguments, I just had different views than you about likely cost-effectiveness etc.
On “weakly validated measures,” I’m talking in part about lack of IRT validation studies for SWB measures used in adults (NIH funded such studies for SWB measures in kids but not adults, IIRC), but also about other things. The published conversation notes only discuss a small fraction of my findings/thoughts on the topic.
On “unconvincing intervention studies” I mean interventions from the SWB literature, e.g. gratitude journals and the like. Personally, I’m more optimistic about health and anti-poverty interventions for the purpose of improving happiness.
On “wrong statistical test,” I’m referring to the section called “Older studies used inappropriate statistical methods” in the linked conversation notes with Joel Hektner.
TBC, I think happiness research is worth engaging with and has things to teach us, and I think there may be some cost-effective happiness interventions out there. As I said in my original comment, I moved on to other topics not because I think the field is hopeless, but because it was in a bad enough state that it didn’t make sense for me to prioritize it at the time.
I can’t share more detail right now and they might not work out, but just FYI, I’m currently working on the details of Science #5 and Miscellaneous #2.
FWIW, one of my first projects at Open Phil, starting in 2015, was to investigate subjective well-being interventions as a potential focus area. We never published a page on it, but we did publish some conversation notes. We didn’t pursue it further because my initial findings were that there were major problems with the empirical literature, including weakly validated measures, unconvincing intervention studies, one entire literature using the wrong statistical test for decades, etc. I concluded that there might be cost-effective interventions in this space, perhaps especially after better measure validation studies and intervention studies are conducted, but my initial investigation suggested it would take a lot of work for us to get there, so I moved on to other topics.
At least for me, I don’t think this is a case of an EA funder repeatedly ignoring work by e.g. Michael Plant — I think it’s a case of me following the debate over the years and disagreeing on the substance after having familiarized myself with the literature.
That said, I still think some happiness interventions might be cost-effective upon further investigation, and I think our Global Health & Well-Being team has been looking into the topic again as that team has gained more research capacity in the past year or two.
I’m not involved in this program, but I would like to see that happen. Though note that some of the readings are copyrighted.
FWIW I broadly agree with Peter here (more so than the original post).
FWIW the EA forum seems subjectively much better to me than it did ~2 years ago, both in platform and in content, and much of that intuitively seems plausibly traceable to specific labor of the EA forum team. Thanks for all your work!
If you know of work on how AI might cause great power conflict, please let me know.
Phrases to look for include “accidental escalation” or “inadvertent escalation” or “strategic stability,” along with “AI” or “machine learning.” Michael Horowitz and Paul Scharre have both written a fair bit on this, e.g. here.
[EA has] largely moved away from explicit expected value calculations and cost-effectiveness analyses.
How so? I hadn’t gotten this sense. Certainly we still do lots of them internally at Open Phil.
Re: cost-effectiveness analyses always turning up positive, perhaps especially in longtermism. FWIW that hasn’t been my experience. Instead, my experience is that every time I investigate the case for some AI-related intervention being worth funding under longtermism, I conclude that it’s nearly as likely to be net-negative as net-positive given our great uncertainty, and therefore I end up stuck doing almost entirely “meta” things like creating knowledge and talent pipelines.
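To illustrate the sign-uncertainty point with a toy calculation (the probabilities and impact numbers below are made up for illustration, not any real Open Phil estimate): when an intervention is only slightly more likely to help than to harm, and the magnitudes are comparable, its expected value sits close to zero and flips sign with small changes in the inputs.

```python
# Toy sketch with invented numbers: an intervention nearly as likely to be
# net-negative as net-positive has an expected value near zero.

p_positive = 0.55          # guessed probability the intervention is net-positive
benefit_if_positive = 100  # arbitrary units of impact if it helps
harm_if_negative = 100     # arbitrary units of impact if it backfires

expected_value = p_positive * benefit_if_positive - (1 - p_positive) * harm_if_negative
print(expected_value)  # ≈ 10: small relative to the stakes, and negative if p_positive dips below 0.5
```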
FWIW, a big thing for Open Phil and a couple of other EA-ish orgs I’ve spoken to is that very few lawyers are willing to put probabilities on risks: they’ll just say “I advise against X,” whereas what we need is “If you do X, then the risk of A is probably 1%-10%, the risk of B is <1%, and the risk of C is maybe 1%-5%.” So it would be nice if you could do some calibration training etc., if you haven’t already.
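As a hypothetical illustration of why quantified estimates are more decision-useful than a bare “I advise against X” (the probability ranges echo the example above, but the cost figures are invented), a rough expected-cost range can be weighed directly against the expected benefit of doing X:

```python
# Hypothetical sketch with invented cost figures: turning a lawyer's probability
# ranges into a rough expected-cost range, which can be compared to X's upside.

risks = {
    # risk name: (low prob, high prob, rough cost in arbitrary $ if it happens)
    "A": (0.01, 0.10, 500_000),
    "B": (0.001, 0.01, 2_000_000),
    "C": (0.01, 0.05, 100_000),
}

expected_cost_low = sum(lo * cost for lo, hi, cost in risks.values())
expected_cost_high = sum(hi * cost for lo, hi, cost in risks.values())
print(expected_cost_low, expected_cost_high)  # ≈ 8,000 to ≈ 75,000 in this made-up example
```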