I’m a globally ranked top-20 forecaster. I believe that AI is not a normal technology. I’m working to help shape AI for global prosperity and human freedom. Previously, I was a data scientist with five years of industry experience.
Peter Wildeford
Hi David—I work a lot on semiconductor/chip export policy, so I think it’s very important to get the strategy right here.
My biggest issue is that “short vs. long” timelines is not a binary. I agree that under longer timelines, say post-2035, China likely can significantly catch up on chip manufacturing. (That seems much less likely pre-2035.) But I think the logic of the controls matters a great deal for 2025-2035 timelines, and the controls might still create a meaningful strategic advantage post-2035.
Who has the chips still matters, since it determines whether a country has enough compute to train its own models, run models at all, and provision cloud providers. You treat “differential adoption” and “who owns chips” as separate when they’re deeply interconnected: if you control chip supply, you inherently influence adoption patterns. There would of course be diffusion of AI, but given chip controls it would be much more likely to come from the US, and the AI could well remain on US cloud infrastructure under US control.
Furthermore, if you grant that AI can accelerate AI development itself, a 2-3 year compute advantage could be decisive, and not just via “fast take-off recursive self-improvement”: even in mundane ways, better AI leads to better chip design tools, better compiler optimization, better datacenter cooling systems, and better materials science for next-gen chips.
You’re right that it is impossible to control 100% of the chips, but that’s not the goal. The goal is to control enough of the chips, enough of the time, to create a structural advantage. Maintaining a 10-to-1 US compute advantage over China would mean that even with AI parity, we’d still have 10x more AI agents than China. And we’d likely have better AI per agent as well.
For example, consider the Russian oil example you discuss. Yes, there’s significant leakage to India and China and these controls aren’t perfect, but Russia’s realized prices have stayed ~$15-20/barrel below Brent throughout 2024-2025, forcing Russia to accept steep discounts while burning cash on shadow fleet operations and longer shipping routes.
And chips are much easier to control than oil right now. Currently, OpenAI can buy one million NVIDIA GB300s to power Stargate, but China and Russia can’t even come close. Chinese chips are currently much weaker in both quantity and quality, and this will persist for a while, since China lacks the relevant chipmaking equipment and likely will for some time—the EUV tech that prints chips at nanometer scale took decades to develop and is arguably the most advanced technology ever made. You seem to be engaging in some all-or-nothing thinking here, or assuming that we can’t possibly block enough chips to matter. But we have already significantly reduced China’s compute stock, and people like DeepSeek’s CEO have said that chip controls are their biggest barrier. Chinese AI development would certainly look different if China could freely buy one million GB300s as well.
The key thing is that semiconductor manufacturing isn’t a commodity market with fungible goods flowing to equilibrium. You’re treating this as a standard economic problem where market forces inevitably equalize, and assuming largely frictionless markets—but neither assumption seems true. The chip supply chain is unusual: extreme manufacturing concentration, decades-long development cycles, and tacit knowledge make it different. Additionally, network effects in AI development could create lock-in before economic pressure equalizes access. Moreover, American/Western AI and chip development isn’t going to flow freely to China, because the US government would continue to stop that from happening as a matter of national security. Capital does flow, but this technology cannot flow quickly, freely, or easily.
It’s also not easy to simply make up for a chip disadvantage with an energy advantage. It’s very difficult to train frontier AI models on ancient hardware. DeepSeek has been trying hard all year to train its models on Huawei chips and still hasn’t succeeded. It doesn’t matter how cheap you make energy if chips remain the limiting factor. Arguably, TSMC’s lead over SMIC has grown, not shrunk, over the past decade despite massive Chinese investment.
All told, I think that China is at a significant AI disadvantage over the next decade or more, and this is due to reasonably effective (albeit imperfect) chip controls. Ideally we would make the chip controls even stronger to press that advantage further (I have ideas on how), but that’s a different conversation from whether the controls were strategically wise in the first place.
Congrats! I also thought it was great.
Sorry for the slightly off-topic question: I noticed the EAG London 2025 talks have been uploaded to YouTube, but I didn’t see any EAG Bay Area 2025 talks. Do you know when those will go up?
I still stand by the book and I attribute a lot of my historical failures in management to not implementing this book well enough (especially the part about creating clarity around goals).
If you’re considering a career in AI policy, now is an especially good time to start applying widely, as there’s a lot of hiring going on right now. On my Substack, I documented over a dozen different opportunities that I think are very promising.
Thank you for sharing your perspective and I’m sorry this has been frustrating for you and people you know. I deeply appreciate your commitment and perseverance.
I hope to share with you a bit of perspective from me, as a hiring manager on the other side of things:
Why aren’t orgs leaning harder on shared talent pools (e.g. HIP’s database) to bypass public rounds? HIP is currently running an open search.
It’s very difficult to run an open search for all conceivable jobs and surface the best fit for each of them. And even if you do have a list of the top candidates for everything, it’s still hard to sort and filter through that list without more screening. This makes HIP a valuable supplement but not a replacement.
~
I also think it would be worth considering how to provide some sort of job security/benefit for proven commitment within the movement
‘The movement’ is just the mix of all the people and orgs doing their own thing. Individual orgs themselves should be responsible for job security and rewarding commitment—the movement itself unfortunately isn’t an entity that is capable of doing that.
~
I know one lady who worked at a top EA org for eight years; she’s now struggling to find her next position within the movement, competing with new applicants! That seems like a waste of career capital.
Hopefully her eight years give her an advantage over other applicants! That is, the career capital hasn’t been ‘wasted’ at all. But it still makes sense to evaluate her against other applicants who may have other skills needed for the role—being good at one role doesn’t automatically make you a perfect fit for another.
~
Moreover, I would avoid the expensive undertaking of a full hiring round until my professional networks had been exhausted. After all, if you’re in my network to begin with, you probably did something meritorious to get there.
While personal networks are a great place to source talent, they’re far from perfect: they’re built partly on merit, but they’re also shaped by bias and a preference for ‘people like us’. A ‘full hiring round’ is thus more meritocratic—anyone can apply, and you don’t need to figure out how to get into the right person’s network first.
~
You might like this article: Don’t be bycatch.
I think you should get the LLM to give you the citation and then cite that (ideally after checking it yourself).
At least in my own normative thought, I don’t just wonder about what meets my standards. [...] I think the most important disagreement of all is over which standards are really warranted.
Really warranted by what? I think I’m an illusionist about this in particular as I don’t even know what we could be reasonably disagreeing over.
For a disagreement about facts (is this blue?), we can argue about actual blueness (measurable) or we can argue about epistemics (which strategies most reliably predict the world?) and meta-epistemics (which strategies most reliably figure out strategies that reliably predict the world?), etc.
For disagreements about morals (is this good?), we can argue about goodness, but what is goodness? Is it platonic? Is it grounded in God? I’m not even sure what there is to dispute. I’d argue the best we can do is appeal to our shared values (perhaps even universal human values, perhaps idealized by arguing about consistency, etc.) and then see what best satisfies those.
~
On your view, there may not be any normative disagreement, once we all agree about the logical and empirical facts.
Right—and this matches our experience! When moral disagreements persist after full empirical and logical agreement, we’re left with clashing bedrock intuitions. You want to insist there’s still a fact about who’s ultimately correct, but can’t explain what would make it true.
~
It’s interesting to consider the meta question of whether one of us is really right about our present metaethical dispute, or whether all you can say is that your position follows from your epistemic standards and mine follows from mine, and there is no further objective question about which we even disagree.
I think we’re successfully engaging in a dispute here and that does kind of prove my position. I’m trying to argue that you’re appealing to something that just doesn’t exist and that this is inconsistent with your epistemic values. Whether one can ground a judgement about what is “really warranted” is a factual question.
~
I want to add that your recent post on meta-metaethical realism also reinforces my point here. You worry that anti-realism about morality commits us to anti-realism about philosophy generally. But there’s a crucial disanalogy: philosophical discourse (including this debate) works precisely because we share epistemic standards—logical consistency, explanatory power, and various other virtues. When we debate meta-ethics or meta-epistemology, we’re not searching for stance-independent truths but rather working out what follows from our shared epistemic commitments.
The “companions in guilt” argument fails because epistemic norms are self-vindicating in a way moral norms aren’t. To even engage in rational discourse about what’s true (including about anti-realism), we must employ epistemic standards. But we can coherently describe worlds with radically different moral standards. There’s no pragmatic incoherence in moral anti-realism the way there would be in global philosophical anti-realism.
You’re right that I need to bite the bullet on epistemic norms too, and I do think that’s a highly effective reply. But at the end of the day, yes, I think “reasonable” in epistemology is also implicitly goal-relative in a meta-ethical sense—it means “in order to have beliefs that accurately track reality.” The difference is that this goal is so universally shared across so many different value systems, and so deeply embedded in the concept of belief itself, that it feels categorical.
You say I’ve “replaced all the important moral questions with trivial logical ones,” but that’s unfair. The questions remain very substantive—they just need proper framing:
Instead of “Which view is better justified?” we ask “Which view better satisfies [specific criteria like internal consistency, explanatory power, alignment with considered judgments, etc.]?”
Instead of “Would the experience machine be good for me?” we ask “Would it satisfy my actual values / promote my flourishing / give me what I reflectively endorse / give me what an idealized version of myself might want?”
These aren’t trivial questions! They’re complex empirical and philosophical questions. What I’m denying is that there’s some further question—“But which view is really justified?”—floating free of any standard of justification.
Your challenge about moral uncertainty is interesting, but I’d say: yes, you can hedge across different moral theories if you have a higher-order standard for managing that uncertainty (like maximizing expected moral value across theories you find plausible). That’s still goal-relative, just at a meta-level.
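To make that concrete, here's a toy sketch of what expected-moral-value hedging could look like; the theories, credences, and numbers are all made up for illustration, and it assumes the theories' values have already been put on a common scale:

```python
# Toy sketch of hedging under moral uncertainty by maximizing expected moral
# value. Theories, credences, and option values are invented for illustration
# and assumed to already sit on a common scale.
credences = {"theory_A": 0.6, "theory_B": 0.4}

values = {
    "theory_A": {"option_1": 10, "option_2": 2},
    "theory_B": {"option_1": -5, "option_2": 3},
}

def expected_moral_value(option):
    # Weight each theory's verdict by the credence you place in that theory.
    return sum(credences[t] * values[t][option] for t in credences)

best = max(["option_1", "option_2"], key=expected_moral_value)
print(best)  # option_1: 0.6*10 + 0.4*(-5) = 4.0, beating option_2's 2.4
```

The standard doing the work here is the meta-level rule "maximize expected moral value given your credences," which is still goal-relative.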
The key insight remains: every “should” or “justified” implicitly references some standard. Making those standards explicit clarifies rather than trivializes our discussions. We’re not eliminating important questions—we’re revealing what we’re actually asking.
You raise a fair challenge about epistemic norms! Yes, I do think there are facts about which beliefs are most reasonable given evidence. But I’d argue this actually supports my view rather than undermining it.
The key difference: epistemic norms have a built-in goal—accurate representation of reality. When we ask “should I expect emeralds to be green or grue?” we’re implicitly asking “in order to have beliefs that accurately track reality, what should I expect?” The standard is baked into the enterprise of belief formation itself.
But moral norms lack this inherent goal. When you say some goals are “intrinsically more rationally warranted,” I’d ask: warranted for what purpose? The hypothetical imperative lurks even in your formulation. Yes, promoting happiness over misery feels obviously correct to us—but that’s because we’re humans with particular values, not because we’ve discovered some goal-independent truth.
I’m not embracing radical skepticism or saying moral questions are nonsense. I’m making a more modest claim: moral questions make perfect sense once we specify the evaluative standard. “Is X wrong according to utilitarianism?” has a determinate, objective, mind-independent answer. “Is X wrong simpliciter?” does not.
The fact that we share deep moral intuitions (like preferring happiness to misery) is explained by our shared humanity, not by those intuitions tracking mind-independent moral facts. After all, we could imagine beings with very different value systems who would find our intuitions as arbitrary as we might find theirs.
So yes, I think we can know things about the future and have justified beliefs. But that’s because “justified” in epistemology means “likely to be true”—there’s an implicit standard. In ethics, we need to make our standards explicit.
Thanks!
I think all reasons are hypothetical, but some hypotheticals (like “if you want to avoid unnecessary suffering...”) are so deeply embedded in human psychology that they feel categorical. This explains our moral intuitions without mysterious metaphysical facts.
The concentration camp guard example actually supports my view—we think the guard shouldn’t follow professional norms precisely because we’re applying a different value system (human welfare over rule-following). There’s no view from nowhere; there’s just the fact that (luckily) most of us share similar core values.
You were negative toward the idea of hypothetical imperatives elsewhere, but I don’t see how you get around the need for them.
You say epistemic and moral obligations work “in the same way,” but they don’t. Yes, we have epistemic obligations to believe true things… in order to have accurate beliefs about reality. That’s a specific goal. But you can’t just assert “some things are good and worth desiring” without specifying… good according to what standard? The existence of epistemic standards doesn’t prove there’s One True Moral Standard any more than the existence of chess rules proves there’s One True Game.
For morality, there are facts about which actions would best satisfy different value systems. I consider those to be a form of objective moral facts. And if you have those value systems, I think it is thus rationally warranted to desire those outcomes and pursue those actions. But I don’t know how you would get facts about which value system to have without appealing to a higher-order value system.
Far from undermining inquiry, this view improves it by forcing explicitness about our goals. When you feel that “promoting happiness is obviously better than promoting misery,” that doesn’t strike me as metaphysical truth but as expressive assertivism. Real inquiry means examining why we value what we value and how to get it.
I’m far from a professional philosopher and I know you have deeply studied this much more than I have, so I don’t mean to accuse you of being naive. Looking forward to learning more.
“Nihilism” sounds bad but I think it’s smuggling in connotations I don’t endorse.
I’m far from a professional philosopher, but I don’t see how you could possibly make substantive claims about desirability from a pure meta-ethical perspective. You definitely can make substantive claims about desirability from a social or a personal perspective, though. The reason we don’t debate racist normative advice is that we’re not racists. I don’t see any other way to determine this.
Morality is Objective
People keep forgetting that meta-ethics was solved back in 2013.
I recently made a forecast based on the METR paper, with median timelines of 2030 and much less probability on 2027 (<10%). I think this forecast of mine is vulnerable to far fewer of titotal’s critiques, but it is still vulnerable to some (especially not having sufficient uncertainty about which type of curve to fit).
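For concreteness, here's a minimal, purely illustrative sketch of the curve-type issue; the data points are made-up placeholders rather than METR's numbers or my actual model, but they show how much the assumed curve family changes the extrapolation:

```python
# Purely illustrative: how the assumed curve family changes an extrapolated
# task-horizon forecast. The data points below are made-up placeholders.
import numpy as np
from scipy.optimize import curve_fit

x = np.array([0, 1, 2, 3, 4, 5, 6], dtype=float)      # years since some baseline
y = np.array([-3.0, -2.0, -1.0, 0.0, 1.5, 3.0, 4.5])  # log2(task horizon, minutes)

def exp_trend(t, a, b):
    # Straight line in log space = exponential growth in the horizon itself.
    return a * t + b

def superexp_trend(t, a, b, c):
    # Quadratic in log space = superexponential growth in the horizon itself.
    return a * t**2 + b * t + c

for name, f in [("exponential", exp_trend), ("superexponential", superexp_trend)]:
    params, _ = curve_fit(f, x, y)
    print(f"{name}: predicted log2-horizon 5 years out = {f(x[-1] + 5, *params):.1f}")
```

The choice of family dominates the extrapolation, which is why not putting enough uncertainty on the curve type is a real weakness.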
p(doom) is about doom. For AI, I think this can mean a few things:

- Literal human extinction
- Humans lose power over their future but are still alive (and potentially even have nice lives), whether via stable totalitarianism, gradual disempowerment, or other means

The second bucket is pretty big.
What do the superforecasters say? Well, the most comprehensive effort to ascertain and influence superforecaster opinions on AI risk was the Forecasting Research Institute’s Roots of Disagreement Study.[2] In this study, they found that nearly all of the superforecasters fell into the “AI skeptic” category, with an average P(doom) of just 0.12%. If you’re tempted to say that their number is only so low because they’re ignorant or haven’t taken the time to fully understand the arguments for AI risk, then you’d be wrong; the 0.12% figure was obtained after having months of discussions with AI safety advocates, who presented their best arguments for believing in AI x-risks.
I see this cited a bunch, but I think this study is routinely misinterpreted. I have some firsthand knowledge from having participated in it.
The question being posed to forecasters was about literal human extinction, which is pretty different from how I usually see p(doom) interpreted. A lot of the “AI skeptics” were very sympathetic to AI being the biggest deal, but just didn’t see literal extinction as that likely. I also have a moderate p(doom) (20%-30%) while thinking literal extinction is much lower than that (<5%).

Also, the study ran 2023 April 1 to May 31, which was right after the release of GPT-4. Since then there’s been so much more development. My guess is that if you polled the “AI skeptics” now, their p(doom) would have gone up.
I just saw that Season 3, Episode 9 of Leverage: Redemption (“The Poltergeist Job”), which came out on 2025 May 29, has an unfortunately very unflattering portrayal of “effective altruism”.
Matt claims he’s all about effective altruism. That it’s actually helpful for Futurilogic to rake in billions so that there’s more money to give back to the world. They’re about to launch Galactica. That’s free global Internet.
[...] But about 50% of the investments in Galactica are from anonymous crypto, so we all know what that means.
The main antagonist and CEO of Futurilogic, Matt, uses EA to justify horrific actions, including allowing firefighters to be injured when his company’s algorithm throttles cell service during emergencies. He also literally murders people while claiming it’s for the greater good. And if that’s not enough, he’s also laundering money for North Korea through crypto investments!
Why would he do this? He explicitly invokes utilitarian reasoning (“Trolley Theory 101”) to dismiss harm caused:
When I started this company, I started it on the idea that if we could make enough money, we could make the entire world a better place, guys. All of it. Sometimes, in order to make something like that happen, huge sacrifices are required. Sacrifices like Josh. Or sacrifices like the firefighters. But that’s Trolley Theory 101, guys. Yeah. I don’t have any regrets. Not one.
And when wielding an axe to kill someone, Matt says: “This is altruism, Skylar! Whatever I need to do to save the world.”
But what’s his cause area? Something about ending “global hunger and homelessness” through free internet access. Matt never articulates any real theory of change beyond “make money (and do crimes) → launch free internet → somehow save world.”
And of course the show depicts the EA tech executives at Futurilogic as being in a “polycule” with a “hive mind” mentality.
Bummer.
AGI by 2028 is more likely than not
I don’t think this is as clear a dichotomy as people think it is. A lot of global catastrophic risk doesn’t come from literal extinction, because human extinction is very hard to bring about. A lot of mundane work on GCR policy involves a wide variety of threat models that are not just extinction.
You might like “A Model Estimating the Value of Research Influencing Funders”, which makes a similar point, but quantitatively.