(To be clear, I do think many of these charities do some good and are run with the best of intentions, etc. But I still also stand by the statement in the parent comment.)
That is the most PR-optimized list of donations I have ever seen in my life.
And also to include the link? Or maybe I'm too dumb to see it.
Thanks for sharing this. I did an Erasmus exchange year in Italy in 2010-11 that was very important for my personal growth, although it was not particularly beneficial professionally or academically.
Nice work!
On AI chip smuggling, rather than the report you listed, which is rather outdated now, I recommend reading Countering AI Chip Smuggling Has Become a National Security Priority, which is essentially a Pareto improvement over the older one.
I also think Chris Miller's How US Export Controls Have (and Haven't) Curbed Chinese AI provides a good overview of the AI chip export controls, and it is still quite up-to-date.
On timelines, I think it's worth separating out export controls on different items (a rough numerical sketch follows this list):
- Controls on AI chips themselves probably start having effects on AI systems within 1-2 years or so (say 6-12 months to procure and install the chips, plus another 6-18 months to develop/train/post-train a model with them), or even sooner, within a year or so, for deployment/inference.
- Controls on semiconductor manufacturing equipment (SME) take longer to have an impact, as you say, but I think not that long. SMIC (and therefore future Ascend GPUs) is clearly limited by the 2019 ban on EUV photolithography, and I would say this was apparent as early as 2023. So I think SME controls instituted now would start having an effect on chip production already in the late 2020s, and on AI systems 1-2 years after that.
- Most other relevant products (e.g., HBM and EDA software) probably fall between those two in terms of how quickly controls affect downstream AI systems.
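As a rough numerical sketch of the above, here's a minimal back-of-the-envelope in Python. All lag ranges are my own rough assumptions from the list above, for illustration only:

```python
# Rough lag, in years, from imposing an export control to visible effects
# on AI systems. All ranges are rough assumptions, not data.
CONTROL_YEAR = 2025

lag_years = {
    "AI chips": (1.0, 2.5),  # 6-12 mo to procure/install + 6-18 mo to train
    "SME": (3.0, 7.0),       # chip production hit late 2020s, AI systems 1-2 y later
    "HBM/EDA": (2.0, 5.0),   # assumed to fall between the other two
}

for item, (lo, hi) in lag_years.items():
    print(f"{item}: effects roughly {CONTROL_YEAR + lo:.0f}-{CONTROL_YEAR + hi:.0f}")
# AI chips: effects roughly 2026-2028
# SME: effects roughly 2028-2032
# HBM/EDA: effects roughly 2027-2030
```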
So policy changes in 2025 could start affecting Chinese AI models as early as 2027 (for chips) and around 2030 (for SME), which seems relevant even in short-timeline worlds. For example, Daniel Kokotajlo's median for superhuman coders is now 2029, and IIUC Eli Lifland's median is in the (early?) 2030s.
But I would go further to say that export controls now can substantially affect compute access well into the 2030s or even the 2040s. You write that
the technical barriers [to Chinese indigenization of leading-edge chip fabrication] are higher today, but not so high that intense Chinese investment can't dent it over the course of a decade. SMEE is investing in laser-induced discharge plasma tech, with rumored trial production as soon as the end of this year. SMIC is using DUV more efficiently for (lower-yield, but still effective) chip production. There's also work on Nanoimprint lithography, immersion lithography, packaging, etc. And that won't affect market shares, until it does.
I won't have time to go into great detail here, but I have researched this a fair amount and I think you are too bullish on Chinese leading-edge chip fabrication. To be clear, China can and will certainly produce AI chips, and these are decent AI chips. But it will likely produce those chips less cost-efficiently and at lower volumes due to having worse equipment, and they will have worse performance than TSMC-fabbed chips due to using older-generation processes. The lack of EUV machines alone, which will likely persist for at least another five years and plausibly well into the 2030s, is a very significant constraint.
On SMEE and SMIC in particular, you write:
SMEE is investing in laser-induced discharge plasma tech, with rumored trial production as soon as the end of this year.
SMEE was established 23 years ago to produce indigenous lithography machines, and it still has essentially no market share; it still has not produced an immersion DUV machine, let alone an EUV machine, which is far more difficult. I would not be surprised if, when the indigenous Chinese immersion DUV machine does finally arrive, it is a SiCarrier (or subsidiary) product and not an SMEE product.
SMIC is using DUV more efficiently for (lower-yield, but still effective) chip production.
In what sense do you mean SMIC is using DUV more efficiently? It is using immersion DUV multi-patterning (with ASML machines) to compensate for its lack of EUV machines. But as you note, this means worse yield and lower throughput. I don't see any sense in which SMIC is using DUV more efficiently; it's just using it more, in order to get around a constraint that TSMC doesn't have. In any case, multi-patterning with immersion DUV can only take you so far; there's likely a hard stop around what's vaguely called the 2 nm or 1.4 nm process nodes, even if you do multi-patterning perfectly. (For reference, TSMC is starting mass production on its "2 nm" process this year.)
On the oil analogy, it seems from
The long-term winners were definitely not the groups that extracted or refined the oil, even though they made lots of money—it was the countries that consumed the oil and built industrial capacity leading up to WWII, and could then use the controlled supply of oil. … And as far as I can tell, no-one is restricting Chinese companies from using compute right now—they don't own it, but can use the same LLMs I do.
that you think ownership of compute does not substantially influence who will have or control the most powerful AI systems? I disagree; I think it will impact both AI developers and companies relying on access to AI models. First, AI developers: export controls put the Chinese AI industry as a whole at a compute disadvantage (which we see in the fact that they train less compute-intensive models), for a few reasons:
- It is generally unappealing for major AI developers to merely rent GPUs they don't own, as a result of which they often build their own data centers (xAI, Google) or rely on partnerships for exclusive access (OpenAI, Anthropic). I think the main reasons for this are cost, (un)certainty, and greater control over the cluster set-up.
- Chinese companies cannot build their own data centers with export-controlled chips without smuggling, and cannot embark on these partnerships with American hyperscalers. If they want to use cutting-edge GPUs, they must either rely on smuggling (which means higher prices and smaller quantities) or rent from foreign cloud providers.
- The US likely could, if and when it wanted to, deny Chinese customers access to compute via the cloud, at least for large-scale use and at least for the large hyperscalers. So relying on foreign cloud compute gives the US a lot of leverage over Chinese AI developers. (There are some questions around how feasible it is to circumvent KYC checks, and especially whether the US can effectively ensure these checks are done well in third countries, but I think the US could deny China most of the world's rentable cloud compute in this way.)
- Chinese privacy law makes it harder for Chinese AI developers to use foreign cloud compute, at least for some use cases. I'm not sure exactly how strong this effect is, but it seems non-negligible.
- For deployment/inference, you may want to have your compute located close to your users, as that reduces latency.
- In the event of an actual conflict over or involving AI, you can seize compute located on territory you control. I hope that doesn't happen, obviously, but it's definitely a reason why, as an AI developer, you'd prefer to use compute located in your own country rather than in a rival country or one of the rival's allies or partners.
That's AI developers. As for the AI industry more broadly, there are barriers for Chinese companies wanting to use US models like ChatGPT or Claude, which is likely one reason why Manus moved to Singapore, for example. So the current disparity in who owns compute and where it is located means Chinese AI developers are relatively compute-poor, and since Chinese companies rely substantially on domestic Chinese models, it seems to me like the entire Chinese AI industry is impacted by these restrictions.
Also, I disagree that oil "only mattered because it enabled economic development". In WWII especially, oil was necessary for fuel-hungry militaries to function. I think AI will also be militarily important even ignoring its effects on economic development, though maybe less so than oil.
On the other hand, I think you're wrong in saying that "the chip supply chain has unique characteristics [compared to oil,] with extreme manufacturing concentration, decades-long development cycles, and tacit knowledge that make it different"—because the same is true for crude oil extraction! What matters is who refines it, and who buys it, and what it's used for.
I think the technical barriers to developing EUV photolithography from scratch are far higher than anything needed to extract, refine, or transport oil. I also think the market concentration is far higher in the AI chip design and semiconductor industries. There's no oil equivalent to TSMC's ~90% leading-edge logic chip, NVIDIA's ~90% data center GPU, or ASML's 100% EUVL machine market shares.
Second, if we're talking about takeoff after 2035, the investments in China are going to swamp western production. (This is the command economy advantage—though I could imagine it's vulnerable to the typical failure modes where they overinvest in the wrong thing, and can't change course quickly.)
Are you sure? I would guess that the chip supply chain used by NVIDIA has more investment than the Chinese counterpart. For example, according to a SEMI report, China will spend $38bn on semiconductor manufacturing equipment in 2025, whereas the US + Taiwan + South Korea + Japan are set to spend a combined ~$70bn. I would guess it looks directionally similar for R&D investment, though the difference may be smaller there.
For moderately short, 2-6 year timelines, the timelines for chip fabs are long enough that we're mostly locked in not just to overall western dominance via chips produced in Taiwan, but because fabrication plants built today are coming online closer to 2029, and the rush to build Chinese fabrication plants is already baked in. And that's just the fabs—for the top chips, the actual chip design usually takes as long or longer than building the plant.
I was under the impression the AI chip design process is more like 1.5-2 years, and a fab is built in 2-3 years in Taiwan or 4 years for the Arizona fab. It sounds like you think differently? Whatever it is, I would guess it's roughly similar across the industry, including in China. If my numbers are right, that seems to leave enough room for policy now to influence the relative compute distribution of nations 5-6 years from now (a toy timeline check below).
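As that toy check, here is a sketch under my assumed durations above, and under the further simplifying assumption that design and fab construction can proceed roughly in parallel:

```python
# Toy check: when does capacity from decisions made now come online?
# Durations are assumptions from the paragraph above; ramp-up is ignored.
START = 2025
design_years = (1.5, 2.0)  # AI chip design cycle (assumed)
fab_years = (2.0, 4.0)     # fab construction: ~2-3 y in Taiwan, ~4 y in Arizona

# If design and construction run in parallel, the longer one binds.
online = (START + max(design_years[0], fab_years[0]),
          START + max(design_years[1], fab_years[1]))
print(online)  # (2027.0, 2029.0): new capacity arrives ~2-4 years out
```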
Interesting!
While the small body size of sardines and anchovies means that many individuals must be killed to produce a given amount of food, thereby scaling up the moral weight, a meaningful moral cost calculation should extend beyond these direct first-order consequences to account for indirect higher-order consequences, especially given that all food production invariably involves some level of collateral damage, as will be discussed further on.
On the other hand, sardines really are very small, and I reckon you'd need on the order of 100x as many sardines as you'd need salmon to get the same amount of calories (a quick sanity check of that ratio below). I wonder how many small animals would die to produce the amount of calories of plant-based food you'd get from a sardine? I'd guess <<0.1, but I'd be interested in seeing estimates here as it seems pretty cruxy.
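As that sanity check of the ~100x figure, a quick back-of-the-envelope with assumed numbers; the edible weights and caloric density below are rough guesses, not sourced data:

```python
# How many sardines does it take to match one salmon, holding calories fixed?
# All inputs are rough assumed values for illustration.
salmon_edible_kg = 2.5    # assumed edible yield of one farmed salmon
sardine_edible_kg = 0.02  # assumed edible yield of one sardine
kcal_per_kg = 2000        # assume similar caloric density for both oily fish

ratio = (salmon_edible_kg * kcal_per_kg) / (sardine_edible_kg * kcal_per_kg)
print(ratio)  # ~125 sardines per salmon-equivalent of calories
```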
As for traitor, I think the only group here that can be betrayed is humanity as a whole, so as long as one believes they're doing something good for humanity I don't think it'd ever apply.
Hmm, that seems off to me? Unless you mean "severe disloyalty to some group isn't Ultimately Bad, even though it can be instrumentally bad". But to me it seems useful to have a concept of group betrayal, and to consider doing so to be generally bad, since I think group loyalty is often a useful norm that's good for humanity as a whole.
Specifically, I think group-specific trust networks are instrumentally useful for cooperating to increase human welfare. For example, scientific research can't be carried out effectively without some amount of trust among researchers, and between researchers and the public, etc. And you need some boundary for these groups that's much smaller than all humanity to enable repeated interaction, mutual monitoring, and norm enforcement. When someone is severely disloyal to one of those groups they belong to, they undermine the mutual trust that enables future cooperation, which I'd guess is ultimately often bad for the world, since humanity as a whole depends for its welfare on countless such specialised (and overlapping) communities cooperating internally.
I'm obviously not Matthew, but the OED defines them like so:
sell-out: "a betrayal of one's principles for reasons of expedience"
traitor: "a person who betrays [is gravely disloyal to] someone or something, such as a friend, cause, or principle"
Unless he is lying about what he believes (which seems unlikely), Matthew is not a sell-out, because according to him Mechanize is good, or at minimum not bad, for the world on his worldview. Hence, he is not betraying his own principles.
As for being a traitor, I guess the first question is: a traitor to what? To EA principles? To the AI safety cause? To the EA or AI safety community? In order:
- I don't think Matthew is gravely disloyal to EA principles, as he explicitly says he endorses them and has explained how his decisions make sense on his worldview.
- I don't think Matthew is gravely disloyal to the AI safety cause, as he's been openly critical of many common AI doom arguments for some time, and you can't be disloyal to a cause you never really bought into in the first place.
- Whether Matthew is gravely disloyal to the EA or AI safety communities feels less obvious to me. I'm guessing a bunch of people saw Epoch as an AI safety organisation, and by extension its employees as members of the AI safety community, even if the org and its employees did not necessarily see itself or themselves that way, and felt betrayed for that reason. But it still feels off to me to call Matthew a traitor to the EA or AI safety communities, especially given that he's been critical of common AI doom arguments. This feels more like a difference over empirical beliefs than a difference over fundamental values, and it seems wrong to me to call someone gravely disloyal to a community for drawing unorthodox but reasonable empirical conclusions and acting on them, while broadly having similar values. I think people should be allowed to draw conclusions (or even change their minds) based on evidence, and act on those conclusions, without it being betrayal, assuming they broadly share the core EA values and are being thoughtful about it.
(Of course, it's still possible that Mechanize is a net-negative for the world, even if Matthew personally is not a sell-out or a traitor or any other such thing.)
This is weird because other sources do point towards a productivity gap. For example, this report concludes that "European productivity has experienced a marked deceleration since the 1970s, with the productivity gap between the Euro area and the United States widening significantly since 1995, a trend further intensified by the COVID-19 pandemic".
Specifically, it looks as if, since 1995, the GDP per capita gap between the US and the eurozone has remained very similar, but this is due to a widening productivity gap being cancelled out by a shrinking employment rate gap.
This report from Banque de France has it that "the EU-US gap has narrowed in terms of hours worked per capita but has widened in terms of GDP per hours worked", and that in France at least this can be attributed to "producers and heavy users of IT technologies".
The Draghi report says 72% of the EU-US GDP per capita gap is due to productivity, and only 28% is due to labour hours.
Part of the discrepancy may be that the OWID data only goes until 2019, whereas some of these other sources report that the gap has widened significantly since COVID? But that doesn't seem to be the case in the first plot above (it still shows a widening gap before COVID).
Or maybe most of the difference is due to comparing the US to France/Germany, versus also including countries like Greece and Italy that have seen much slower productivity growth. But that doesn't explain the France data above (it still shows a gap between France and the US, even before COVID).
I'm registering a forecast: Within a few months we'll see a new Vasco Grilo post BOTECing that insecticide-treated bednets are net-negative expected value due to mosquito welfare. Looking forward to it. :)
He reframes EA concepts in a more accessible way, such as replacing "counterfactuals" with the sports acronym "VORP" (Value Over Replacement Player).
And here I was thinking hardly a soul read my suggesting this framing …
Thanks for writing this, it's very interesting.
Instead, I might describe myself as a preferentialist or subjectivist about what matters, so that what's better is just what's preferred, or what would be better according to our preferences, attitudes or ways of caring, in general.
This sounds similar to Christine Korsgaard's (Kantian) view on value, where things only matter because they matter to sentient beings (people, to Kant). I think I was primed to notice this because I remember you had some great comments on my interview with her from four years ago.
Quoting her:
Utilitarians think that the value of people and animals derives from the value of the states they are capable of – pleasure and pain, satisfaction and frustration. In fact, in a way it is worse: In utilitarianism, people and animals don't really matter at all; they are just the place where the valuable things happen. That's why the boundaries between them do not matter. Kantians think that the value of the states derives from the value of the people and animals. In a Kantian theory, your pleasures and pains matter because you matter, you are an "end in yourself" and your pains and pleasures matter to you.
I guess "utilitarianism" above could be replaced with "hedonism" etc. and it would sort of match your writing that hedonism etc. is "guilty [...] of valuing things in ways that don't match how we care about things". Anyway, she discusses this view in much greater detail in Fellow Creatures.
See also St. Jules, 2024 and Roelofs, 2022 (pdf) for more on ways of caring and moral patienthood, using different terminology.
Fyi, the latter two of these links are broken.
Thanks!
The correct "moral fix" isn't "don't get mail," it's "don't kick dogs." Do you share this intuition of non-responsibility?
I'm also not a philosopher, but I guess it depends on what your options are. If your only way of influencing the situation is by choosing whether or not to get mail, and the dog-kicking is entirely predictable, you have to factor the dog-kicking into the decision. Of course the mailman is ultimately much more responsible for the dog-kicking than you are, in the sense that your action is one you typically wouldn't expect to cause any harm, whereas his action will always predictably cause harm. (In the real world, obviously there are likely many ways of getting the mailman to stop kicking dogs that are better than giving up mail.)
I'm not sure whether it makes sense to think of blameworthy actions as wrong by definition. It probably makes more sense to tie blameworthiness to intentions, and in that case an action could be blameworthy even though it has good consequences, and even though endorsing it leads to good consequences. Anyway, if so, obviously the mailman is also much more blameworthy than you, given that he presumably had ill intentions when kicking the dog, whereas you had no ill intentions when getting your mail delivered.
To clarify, I think I'm OK with having a taboo on advocating that "it is better for the world for innocent group X of people not to exist", since that seems like the kind of naive utilitarianism we should definitely avoid. I'm just against a taboo on asking, or trying to better understand, whether "it is better for the world for innocent group X of people not to exist" is true or not. I don't think Vasco was engaging in advocacy; my impression was that he was trying to do the latter, while expressing a lot of uncertainty.
Thanks, that is a useful distinction. Although I would guess Vasco would prefer to frame the theory of impact as "find out whether donating to GiveWell is net positive → help people make donation choices that promote welfare better" or something like that. I buy @Richard Y Chappell🔸's take that it is really bad to discourage others from effective giving (at least when it's done carelessly/negligently), but imo Vasco was not setting out to discourage effective giving, or it doesn't seem like that to me. He is, I'm guessing, cooperatively seeking to help effective givers and others make choices that better promote welfare, which they are presumably interested in doing.
There are obviously some cruxes here—including whether there is a moral difference between actively advocating for others not to hand out bednets vs. passively choosing to donate elsewhere / spend on oneself, and whether there is a moral difference between a bad thing being part of the intended MoA vs. a side effect. I would answer yes to both, but I have lower consequentialist representation in my moral parliament than many people here.
Yes, I personally lean towards thinking the act-omission difference doesn't matter (except maybe as a useful heuristic sometimes).
As for whether the harm to humans is incidental-but-necessary or part-of-the-mechanism-and-necessary, I'm not sure what difference it makes if the outcomes are identical? Maybe the difference is that, when the harm to humans is part-of-the-mechanism-and-necessary, you may suspect that it's indicative of a bad moral attitude. But I think the attitude behind "I won't donate to save lives because I think it creates a lot of animal suffering" is clearly better (since it is concerned with promoting welfare) than the attitude behind "I won't donate to save lives because I prefer to have more income for myself" (which is not).
Even if one would answer no to both cruxes, I submit that "no endorsing MoAs that involve the death of innocent people" is an important set of side rails for the EA movement. I think advocacy that saving the lives of children is net-negative is outside of those rails. For those who might not agree, I'm curious where they would put the rails (or whether they disagree with the idea that there should be rails).
I do not think it is good to create taboos around this question. Like, does that mean we shouldn't post anything that can be construed as concluding that it's net harmful to donate to GiveWell charities? If so, that would make it much harder to criticise GiveWell and find out what the truth is. What if donating to GiveWell charities really is harmful? Shouldn't we want to know and find out?
For what it's worth, though the "funness" of AI safety research (maybe especially technical AI safety research) is probably a factor in determining how many people are interested in working on it, I would be surprised if it's a factor in determining how much money is allocated to the field.