Previously I’ve been a Research Fellow at the Forethought Foundation, where I worked on What We Owe The Future with Will MacAskill; an Applied Researcher at Founders Pledge; and a Program Analyst for UNDP.
Stephen Clare
Sorry, yep, I meant to add an “annually” there!
I’m somewhat sympathetic to something like GiveDirectly’s take. If bednets are something like 10x more valuable than the cash used to purchase them, I find it a bit weird that people don’t usually buy them when given a cash transfer.
I’ve previously written a short comment about mechanisms that could explain this and do think there are important factors that can explain part of the gap (e.g. coordination problems). But I’m still a bit skeptical that the “real” value is 10x different.
I suppose we could straightforwardly just transfer enough cash to everyone below a certain poverty line until their annual income is above it. The Longview team has estimated this would cost about $258 billion [edit: annually] (pp. 8-10 here).
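The top-up idea above is just a poverty-gap calculation: for each person below the line, you pay the difference between the line and their income. A minimal sketch with made-up toy numbers (not Longview's actual model or data):

```python
# Illustrative poverty-gap calculation with hypothetical numbers;
# the real $258B estimate comes from Longview's report, not this sketch.

def poverty_gap_cost(incomes, line):
    """Total annual transfer needed to lift every income up to `line`."""
    # People already at or above the line contribute zero.
    return sum(max(0, line - y) for y in incomes)

# Toy example: three people below a $700/year line, one above it.
incomes = [300, 500, 650, 900]
print(poverty_gap_cost(incomes, 700))  # 400 + 200 + 50 + 0 = 650
```

Scaling this up to everyone worldwide below a given poverty line is what produces an aggregate figure like the one Longview estimates.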
a lot of my confidence in the above comes from farmed animal welfare strictly dominating GiveWell in terms of any plausibly relevant criteria save for maybe PR
Well some people might have ethical views or moral weights that are extremely favourable to people-focused interventions.
Or people could really value certainty of impact, and the evidence base could lead them to be much more confident that marginal donations to GiveWell charities have a counterfactual impact than marginal donations to animal welfare advocacy orgs.
FWIW I’m more likely to donate to animal welfare orgs too, but I’m sufficiently uncertain that I wouldn’t say I believe they dominate the GW orgs on relevant criteria. That would be pretty surprising; they’re very different in their goals and approaches!
Assume “Philanthropy to the Right-of-Boom” is a roaring success (say, a 95th-percentile good outcome for that report). In a few years, how does the world look different? (Pick any number of years you’d like!)
I first got interested in civilizational collapse and global catastrophic risks by working on a Maya archaeological excavation in Guatemala.
I didn’t know this, and it’s awesome.
What did your work on the Maya teach you about civilizational collapse?
I’m curious who you’ve seen recommending starting with Mearsheimer? That seems like an unbalanced starting point to me.
I’d personally recommend a textbook, like an older edition of World Politics.
Thanks for writing this. I think a lot of it is pointing at something important. I broadly agree that (1) much of the current AI governance and safety chat too swiftly assumes an us-v-them framing, and that (2) talking about countries as actors obscures a huge amount of complexity and internal debate.
On (2), I think this tendency leads to analysis that assumes more coordination among governments, companies, and individuals in other countries than is warranted. When people talk about “the US” taking some action, readers of this Forum are much more likely to be aware of the nuance this ignores (e.g. that some policy may have emerged from much debate and compromise among different government agencies, political parties, or ideological factions). We’re less likely to consider such nuances when people talk about “China” doing something.
That said, I think your claim that governments don’t influence AI development [via semiconductor progress] is too strong. For example, this sentence:
It seems plain that nations are not currently meaningful players in AI development and deployment, absent conspiracy-level secrecy.
seems likely wrong to me. The phrasing (“it seems plain”) also suggests to me that you should be somewhat less confident in your views on these issues overall.
Some examples of governments being meaningful players:
The US government potentially kneecapping Chinese AI development by putting in place harsh export controls on semiconductor products and software
There are large public subsidies for chip companies (potentially up to $150 billion for Chinese semiconductor companies)
More generally, governments will have a lot of influence over domestic companies by deciding what kinds of regulations they’ll be subject to
There are also historical examples of government action shaping outcomes in the semiconductor (and therefore AI) space.
For example, early demand for semiconductors was driven by the US government’s military and space programs. And TSMC got started when the Taiwanese administration invited Morris Chang to start a semiconductor company in Taiwan (and provided half his start-up funding) (source: I read this in Chip War, and that take is summarized on the Wikipedia page).
You write also that “Google and Microsoft really care about each other’s chip access in a way that they only do to a weaker degree about Alibaba’s.” That may be true, I don’t really know. But I’m pretty confident that the US government does care a lot about whether Google or Alibaba have access to more chips. Hence the export controls, subsidies, and regulations discussed above.
I disagree fwiw. The benefits of transparency seem real but ultimately relatively small to me, whereas there could be strong personal reasons for some people to decline to publicise their participation.
More country-specific content could be really interesting. I’d be interested in broad interviews covering:
China—economic projections, expert views and disagreement on stability of CCP, tech progress, info on public opinion about US/West, demographic challenges, entrepreneurship, etc. (not sure he’d be the best person to cover all this, but maybe Kaiser Kuo?)
India—whether high growth rates can be sustained, Sino-Indian relations, complexity of India’s diplomatic relationships with Russia and US, challenges and stability of world’s largest democracy, intra-country variation in culture and economic structure, Indian human capital and tech talent + plausibility of India becoming an AI power in next few decades
Same for other emerging powers—maybe Nigeria and Indonesia
A whole episode on semiconductors and supply chains, including role of countries like South Korea, Japan, and Singapore
Thank you for sharing!
This is a tangent, but I think it’s important to consider predictors’ entire track records, and on the whole I don’t think Mearsheimer’s is very impressive. Here’s a long article on that.
I think this is a ridiculous idea, but the linked article (and headline of this post) is super clickbait-y. This idea is mentioned in two sentences in the court documents (p. 20 of docket 1886, here). All we know is that Gabriel, Sam’s brother, sent a memo to someone at the FTX Foundation mentioning the idea. We have no idea if Sam even heard about this or if anyone at the Foundation “wanted” to follow through with it. I’m sure all sorts of wild possibilities got discussed around that time. Based on the evidence, it’s a huge leap to say there were desires or plans to act on this idea.
Thanks for this! I agree interventions in this direction would be worth looking into more, though I’d also say that tractability remains a major concern. I’m also just really uncertain about the long-term effects.
I think the Quincy Institute is interesting but want to note that it’s also very controversial. Seems like they can be inflammatory and dogmatic about restraint policies. From an outside perspective I found it hard to evaluate the sign of their impact, much less its magnitude. I don’t think I’d recommend 80K put them on the job board right now.
Thanks for catching that, you’re absolutely right. That should read either “about 100,000 deaths” or “hundreds of thousands of casualties.” I’ll get that fixed.
I can certainly empathize with the longtermist EA community being hard to ignore. It’s much flashier and more controversial.
For what it’s worth I think it would be possible and totally reasonable for you to filter out longtermist (and animal welfare, and community-building, etc.) EA content and just focus on the randomista stuff you find interesting and inspiring. You could continue following GiveWell, Founders Pledge’s global health and development work, and HLI. Plus, many of Charity Entrepreneurship’s charities are randomista-influenced.
For example, I make heavy use of the unsubscribe feature on the Forum to try and keep my attention focused on the issues I care about rather than what’s most popular (ironically I’m unsubscribed and supposed to be ignoring the ‘Community’ feed lol).
I agree with you about SNT/ITN. I like that chapter of your thesis a lot, and also find John’s post here convincing.
It does seem to me that randomista EA is alive and largely well—GW is still growing, global health still gets the most funding (I think), many of Charity Entrepreneurship’s new charities are randomista-influenced, etc.
There are a lot of things going on under the “EA” umbrella. HLI’s work feels very different from what other EAs do, but equally, a typical animal welfare org’s work or a typical longtermist org’s work will also feel very different, because EAs do a lot of different things now.
Just curious—do you not feel like GiveWell, Happier Lives Institute, and some of Founders Pledge’s work, for example, count as randomista-flavoured EA?
On (1), I commented above, but most supplemental creatine is vegan as far as I can tell.
Great comment, I appreciate this perspective and have definitely updated towards thinking the 10x gap is more explainable than I thought.
I do note that some of the examples you gave still leave me wondering if the families would rather just have the cash. Sure, perhaps it would be spent on high-priority and perhaps social signal-y things like weddings. But if they can’t currently afford to send all their kids to school or to pay for medical treatment, I wonder if they’d sensibly rather have the cash to spend on those things than on a bednet.
(Also, my understanding from surveys of cash recipients is that most do spend their money on essentials or investments.)