Anything I write here is written purely on my own behalf, and does not represent my employer's views (unless otherwise noted).
It would be helpful if you mentioned who the original inventor was.
I don't see how this is reinventing the wheel? The post makes many references to development economics (11 mentions to be precise). It was not an instance of independently developing something that ended up being close to development economics.
I don't think you're wrong exactly, but AI takeover doesn't have to happen through a single violent event, or through a treacherous turn or whatever. All of your arguments also apply to the situation with H. sapiens and H. neanderthalensis, but those factors did not prevent the latter from going extinct largely due to the activities of the former:
There was a cost to the violence that humans committed against neanderthals
The cost of using violence was not obviously smaller than the benefits of using violence: there was a strong motive for the neanderthals to fight back, and using violence risked escalation, whereas peaceful trade might have avoided those risks
There was no one human that controlled everything; in fact, humans likely often fought against one another
You allow for neanderthals to be less capable or coordinated than humans in this analogy, which they likely were in many ways
The fact that those considerations were not enough to prevent neanderthal extinction is one reason to think they are not enough to prevent AI takeover, although of course the analogy is not perfect or conclusive, and it's just one reason among several. A couple of relevant parallels include:
If alignment is very hard, that could mean AIs compete with us over resources that we need to survive or flourish (e.g., land, energy, other natural resources), similar to how humans competed over resources with neanderthals
The population of AIs may be far larger, and grow more rapidly, than the population of humans, similar to how human populations were likely larger and growing at a faster rate than those of neanderthals
I don't think people object to these topics being heated either. I think there are probably (at least) two things going on:
There's some underlying thing causing some disagreements to be heated/emotional, and people want to avoid that underlying thing (that could be that it involves exclusionary beliefs, but it could also be that it is harmful in other ways)
There's a reputational risk in being associated with controversial issues, and people want to distance themselves from those for that reason
Either way, I don't think the problem is centrally about exclusionary beliefs, and I also don't think it's centrally about disagreement. But anyway, it sounds like we mostly agree on the important bits.
Just noting for anyone else reading the parent comment but not the screenshot, that said discussion was about Hacker News, not the EA Forum.
I was a bit confused by this comment. I thought "controversial" commonly meant something more than just "causing disagreement", and indeed I think that seems to be true. Looking it up, the OED defines "controversial" as "giving rise or likely to give rise to controversy or public disagreement", and "controversy" as "prolonged public disagreement or heated discussion". That is, a belief being "controversial" implies not just that people disagree over it, but also that there's an element of heated, emotional conflict surrounding it.
So it seems to me like the problem might actually be controversial beliefs, and not exclusionary beliefs? For example, antinatalism, communism, anarcho-capitalism, vaccine skepticism, and flat earthism are all controversial, and could plausibly cause the sort of controversy being discussed here, while not being exclusionary per se. (There are perhaps also some exclusionary beliefs that are not that controversial and therefore accepted, e.g., some forms of credentialism, but I'm less sure about that.)
Of course I agree that there's no good reason to exclude topics/people just because there's disagreement around them; I just don't think "controversial" is a good word to fence those off, since it has additional baggage. Maybe "contentious" or "tendentious" are better?
Perhaps Obamacare might be one example of this in America? I think Trump had a decent amount of rhetoric saying he would repeal it, then didn't do anything when he reached power.
My recollection was that Trump spent quite a lot of effort trying to repeal Obamacare, but in the end didn't get the votes he needed in the Senate. Still, I think your point that actual legislation often looks different from campaign promises is a good one.
Let me see if I can rephrase your argument, because I'm not sure I get it. As I understand it, you're saying:
In humans, higher IQ means better performance across a variety of tasks. This is analogous to AI, where more compute/parameters/data etc. means better performance across a variety of tasks.
AI systems tend to share a common underlying architecture, just as humans share the same basic biology.
For humans, when IQ increases, there are improvements across the board, but still specialization, meaning no single human (the one with the highest IQ) will be better than all other humans at all of those things.
By analogy: For AIs, when they're scaled up, there are improvements across the board, but (likely) still specialization, meaning no single AI (the one with the most compute/parameters/data/etc.) will be better than all other AIs at all of those things.
Now I'm a bit unsure about whether you're saying that you find it extremely unlikely that any AI will be vastly better than all humans in the areas I mentioned, or that you find it extremely unlikely that any AI will be vastly better than all humans and all other AIs in those areas.
If you mean 1-4 to suggest that no AI will be better than all humans and other AIs, I'm not sure whether 4 follows from 1-3, but I think that seems plausible at least. But if this is what you mean, I'm not sure what your original comment ("Note humans are also trained on all those abilities, but no single human is trained to be a specialist in all those areas. Likewise for AIs.") was meant to say in response to my original comment, which was meant as pushback against the view that AGI would be bad at taking over the planet since it wouldn't be intended for that purpose.
If you mean 1-4 to suggest that no AI will be better than all humans, I don't think the analogy holds, because the underlying factor (IQ versus AI scale/algorithms) is different. Like, it seems possible that even unspecialized AIs could just sweep past the most intelligent and specialized humans, given enough time.
For an agent to conquer the world, I think it would have to be close to the best across all those areas
That seems right.
I think this is super unlikely based on it being super unlikely for a human to be close to the best across all those areas
I'm not sure that follows? I would expect improvements on these types of tasks to be highly correlated in general-purpose AIs. I think we've seen that with GPT-3 to GPT-4, for example: GPT-4 got better pretty much across the board (excluding the tasks that neither of them can do, and the tasks that GPT-3 could already do perfectly). That is not the case for a human, who will typically improve in just one domain or a few domains from one year to the next, depending on where they focus their effort.
Yes, that's true. Can you spell out for me what you think that implies in a little more detail?
A 10-fold increase in the number of GPUs above GPT-5 would require a 1 to 2.5 GW data center, which doesn't exist and would take years to build, OR would require decentralized training using several data centers. Thus GPT-5 is expected to mark a significant slowdown in scaling runs.
Why do you think decentralized training using several data centers will lead to a significant slowdown in scaling runs? Gemini was already trained across multiple data centers.
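For a rough sense of where power figures like "1 to 2.5 GW" come from, here is a minimal back-of-envelope sketch. Every number in it is an assumption chosen for illustration (a hypothetical accelerator count, a per-GPU draw roughly in line with current data-center GPUs, and an overhead factor for cooling, networking, and host systems), not a figure from the quoted comment.

```python
# Back-of-envelope power estimate; all inputs are illustrative assumptions.
gpus = 100_000        # hypothetical accelerator count for a frontier-scale run
watts_per_gpu = 700   # rough per-GPU draw for a modern data-center accelerator
overhead = 1.5        # assumed multiplier for cooling, networking, host systems

cluster_gw = gpus * watts_per_gpu * overhead / 1e9
print(f"Assumed current-scale cluster: ~{cluster_gw * 1000:.0f} MW")
print(f"10x the GPU count: ~{10 * cluster_gw:.1f} GW")
```

Under these particular assumptions, a 10x scale-up lands around 1 GW, i.e., at the low end of the quoted range.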
Interesting post! Another potential downside (which I don't think you mention) is that strict liability could disincentivize information sharing. For example, it could make AI labs more reluctant to disclose new dangerous capabilities or incidents (when that's not required by law). That information could be valuable for other AI labs, for regulators, for safety researchers, and for users.
Thank you for writing this! I love rats and found this (and especially watching the video of the rodent farm and reading your account of the breeder visit) distressing and pitiful.
Can you specify what you mean by "2.7x is a ridiculous number"?
I ask because it does happen that economies grow like that in a fairly short amount of time. For example, since the year 2000:
China's GDPpc 2.7x'd about 2.6 times
Vietnam's did it ~2.4 times
Ethiopia's ~2.1 times
India's ~1.7 times
Rwanda's ~1.3 times
The US's GDPpc is on track to 2.7x from 2000 in about 2029, assuming a 4% annual increase
So I assume you don't mean something like "2.7x never happens". Do you mean something more like "it's hard to find policies that produce 2.7x growth in a reasonable amount of time" or "typically it takes economies decades to 2.7x"?
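As a generic reference point for how quickly a 2.7x arrives under constant growth, here is a minimal sketch. The country figures above are based on actual historical data rather than a constant rate, so they won't match this exactly; the rates below are purely illustrative.

```python
import math

def years_to_multiple(multiple: float, annual_growth: float) -> float:
    """Years for a quantity growing at a constant annual rate to reach `multiple`x."""
    return math.log(multiple) / math.log(1 + annual_growth)

# Illustrative constant growth rates only.
for rate in (0.02, 0.04, 0.07, 0.10):
    print(f"{rate:.0%} per year: {years_to_multiple(2.7, rate):.0f} years to 2.7x")
```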
I think the biggest danger to that reasoning is the premise that they are caused by GDP, and only by GDP, which I quite flatly dispute.
Well, this seems like something that is actually worth finding out. Because if it is the case that GDP (or GDP per capita) does have a significant causal influence on one (or more) of them, then you are conditioning on a mediator, (partially) hiding the causal effect of GDP on the outcome. It seems to me like your model assumes that GDP does not have any causal influence on any of these variables, which seems like a pretty strong assumption. Unless I am misunderstanding something.
(ETA: Similarly, if both GDP and life satisfaction causally influence one of the variables, you are conditioning on a collider. That could introduce a spurious negative correlation masking a real correlation between GDP and life satisfaction, via Berkson's paradox. For example, suppose both life satisfaction and GDP cause social stability. Then, when you stratify by social stability, it would not be surprising to find a spurious negative correlation between GDP and life satisfaction, because a high-social-stability country, if it happens to have relatively low GDP, must have very high life satisfaction in order to achieve high social stability, and vice versa.)
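To illustrate the collider point, here is a minimal simulation sketch with a purely hypothetical data-generating process: GDP and life satisfaction are drawn independently, both raise "social stability", and conditioning on high stability induces a spurious negative correlation between them.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical model: GDP and life satisfaction are independent,
# and both causally increase social stability.
gdp = rng.normal(size=n)
life_sat = rng.normal(size=n)
stability = gdp + life_sat + rng.normal(size=n)

# Unconditional correlation: approximately zero by construction.
print(np.corrcoef(gdp, life_sat)[0, 1])

# Condition on the collider (keep only high-stability observations):
high_stability = stability > 1.0
print(np.corrcoef(gdp[high_stability], life_sat[high_stability])[0, 1])  # clearly negative
```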
Any attempt at a defense of GDP, specifically, needs to take into account the fact that it's just a deeply flawed measure of value. That's why econ Nobelists have been arguing against it for over a decade (and likely much longer, given that whole international reports were being published on it in 2012). So even if it *were* more predictive than the model suggests, that still wouldn't address the fact that it's known to be misleading, all on its own, and not something I would spend a lot of time defending on the merits.
My understanding of these critiques is that they say either that (1) GDP is not intrinsically valuable, (2) GDP does not perfectly measure anything that we care about, or fails to measure many things that we care about, and/or (3) GDP focuses too narrowly on quantifiable economic transactions.
But if you were to find empirically that GDP causes something we do care about, e.g., life satisfaction, then I don't understand how those critiques would be relevant? (1) would not be relevant because we don't care about increasing GDP for its own sake, only in order to increase life satisfaction. (2) would not be relevant because whatever GDP would or would not succeed in measuring, it does measure something, and it would be desirable to increase whatever it measures (since whatever that is, causes life satisfaction). (3) would not be relevant because whatever does or does not go into the measure, again, it does measure something, and it would be desirable to increase whatever it measures.
But perhaps the most definitive argument against the unique value of GDP is in simple counterexamples. Between 2005 and 2022, Costa Rica had a higher life satisfaction than the United States, with less than a third of the GDPpc. This simply wouldn't be possible if GDP just bought you happiness. Ergo, that simply cannot be the answer.
Your reductio shows that GDP cannot be the only thing that has a causal influence on life satisfaction (assuming measurements are good, etc.). But I don't think OP or anyone else in this comment section is saying that GDP/wealth/money is the only thing that influences life satisfaction, only at most that it is one thing that has a comparatively strong influence on it. And your counterexample does not disprove that.
I don't know if these things make it robustly good, but some considerations:
Raising and killing donkeys for their skin seems like it could scale up more than the use of working donkeys, since (1) there may be increasing demand for donkey skin as China develops economically, and (2) there may be diminishing demand for working donkeys as Africa develops economically. So it could be valuable to have a preemptive norm/ban against slaughtering donkeys for this use, even if the short-term effect is net-negative.
It is not obvious that working donkeys have net-negative lives. My impression is that their lives are substantially better than the lives of most factory-farmed animals, though that is a low bar. One reason to think that is the case is that working donkeys' owners live more closely to, and are more dependent on, their animals than operators of factory farms do, meaning they benefit more from their animals being healthy and happy.
Markets in donkey skin could have some pretty bad externalities, e.g., with people who rely on working donkeys for a living seeing their animals illegally poached. (On the other hand, this ban could also make such effects worse, by pushing the market underground.) Meanwhile, working donkeys do useful work, so they probably improve human welfare a bit. (I doubt donkey skin used for TCM improves human welfare.)
On non-utilitarian views, you may place relatively more value on not killing animals, and/or relatively less value on reducing suffering. So if you give some weight to those views, that may be another reason to think this ban is net positive.
That makes sense. I agree that capitalism likely advances AI faster than other economic systems. I just don't think the difference is large enough to make the economic system a very useful frame of analysis (or point of intervention) when it comes to existential risk, let alone the primary frame.
Thank you for writing this. I thought it was an interesting article. I want to push back a bit against the claim that AI risk should primarily or even significantly be seen as a problem of capitalism. You write:
[I]n one corner are trillion-dollar companies trying to make AI models more powerful and profitable; in another, you find civil society groups trying to make AI reflect values that routinely clash with profit maximization.
In short, it's capitalism versus humanity.
I do think it is true that the drive to advance AI technology faster makes AI safety harder, and that competition under the capitalist system is one thing generating this drive. But I don't think this is unique to capitalism, or that things would be much better under some other economic system.
The Soviet Union was not capitalist, and yet it developed dangerous nuclear weapons and bioweapons. It put tremendous resources into technological development: space technology, missile systems, military aircraft, and so on. I couldn't find figures for the Cold War as a whole, but in 1980 the USSR outspent the US (and Japan, and Germany) on R&D, as a percentage of GDP. And it did not seem to do better at developing these technologies in a safe way than capitalist countries did (cf. Sverdlovsk, Chernobyl).
If you look at what is probably the second most capable country when it comes to AI, China, you see an AI industry driven largely by priorities set by the state, and investment partly directed or provided by the state. China also has markets, but the Chinese state is highly interested in advancing AI progress, and I see no reason why this would be different under a non-market-based system. This is pretty clear from, e.g., its AI development plan and Made in China 2025, and it has much more to do with national priorities (security, control, strategic competition, and economic growth) than with free market competition.
That still does not seem like reinventing the wheel to me. My read of that post is that it's not saying "EAs should do these analyses that have already been done, from scratch" but something closer to "EAs should pay more attention to strategies from development economics and identify specific, cost-effective funding opportunities there". Unless you think development economics is solved, there is presumably still work to be done, e.g., to evaluate and compare different opportunities. For example, GiveWell definitely engages with experts in global health, but still also needs to rigorously evaluate and compare different interventions and programs.
And again, the article mentions development economics repeatedly and cites development economics texts; why would someone mention a field, cite texts from a field, and then suggest reinventing it without giving any reason?