See this explainer on why AGI could not be controlled enough to stay safe:
https://www.lesswrong.com/posts/xp6n2MG5vQkPpFEBH/the-control-problem-unsolved-or-unsolvable
I post here about preventing unsafe AI.
Note that I’m no longer part of EA, because of overreaches I saw during my time in the community (core people leading technocratic projects with ruinous downside risks, a philosophy based around influencing consequences over enabling collective choice-making, and a culture that’s bent on proselytising both while not listening deeply enough to integrate other perspectives).
Update: back up to 70% chance.
Just spent two hours compiling the different contributing factors. Now that I’ve weighed those factors up more comprehensively, I don’t expect to change my prediction by more than ten percentage points over the coming months. Though I’ll write here if I do.
My prediction: 70% chance that by August 2029 there will be a large reduction in investment in AI and a corresponding crash in AI company stocks, etc, and that both will persist for at least three months.
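For readers curious how this kind of factor-weighing can work in principle: one common approach is to start from a baseline and shift it in log-odds space per factor. The weights below are entirely hypothetical placeholders for illustration, not the actual numbers behind my 70% figure:

```python
from math import log, exp

def logit(p):
    """Convert a probability to log-odds."""
    return log(p / (1 - p))

def sigmoid(x):
    """Convert log-odds back to a probability."""
    return 1 / (1 + exp(-x))

# Hypothetical log-odds shifts per factor (positive = toward a crash).
# These are illustrative placeholders only.
prior = logit(0.5)  # uninformed 50% baseline
factors = {
    "lab losses / weak revenue":     +0.50,
    "commoditisation":               +0.35,
    "data center overinvestment":    +0.25,
    "recession risk":                +0.35,
    "government backing":            -0.20,
    "defense contracts":             -0.20,
    "new capability breakthrough":   -0.20,
}

posterior = sigmoid(prior + sum(factors.values()))
print(f"Aggregate estimate: {posterior:.0%}")  # → Aggregate estimate: 70%
```

Adding in log-odds space (rather than averaging probabilities) means each factor nudges the estimate multiplicatively in odds terms, which is why several modest "for" factors can outweigh a few "against" ones.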
For:
Large model labs losing money
OpenAI made a loss of ~$5 billion last year.
It takes in most of the industry’s consumer and enterprise revenue, but that still came to only $3.7 billion.
The GPT-4.5 model is the result of 18 months of R&D, yet offers only a marginal improvement in output quality while being even more compute-intensive.
If OpenAI, as the supposed industry leader, publicly fails, this could undermine the investment narrative of AI as a rapidly improving and profitable technology, and trigger a market meltdown.
Commoditisation
Other models, by Meta and others, are around as useful for consumers.
DeepSeek undercuts US-designed models with a compute-efficient open-weights alternative.
Data center overinvestment
Microsoft cut at least 14% of planned data center expansion.
Subdued commercial investment interest.
Some investment firm analysts are skeptical, and Sequoia Capital, the second-largest VC firm, has also made the case that returns are lacking relative to the scale of investment ($600+ billion).
SoftBank is the other main backer of the Stargate data center expansion project, and needs to raise ~$18 billion in debt to do so. OpenAI also needs to raise more investment funds next round to cover ~$18 billion, and it is an open question whether there is enough investor interest.
Uncertainty about US government funding
Mismatch between US Defense interest and what large model labs are currently developing.
Model ‘hallucinations’ get in the way of deploying LLMs on the battlefield, given reliability requirements.
On the other hand, this hasn’t prevented partnerships and attempts to deploy models.
Interest in data analysis of integrated data streams (e.g. by Palantir) and in self-navigating drone systems (e.g. by Anduril).
The Russo-Ukrainian war and the Gaza invasion have been testbeds, but the AI models used there are relatively rudimentary and straightforward (Ukrainian drones are still mostly operated remotely by humans, and Israel used an LLM for shoddy target identification).
No clear sign that US administration is planning to subsidise large model development.
The Stargate deal announced by Trump did not involve the government chipping in money.
Likelihood of a (largish) US economic recession by 2029.
Debt/misinvestment overload after a long period of low interest rates.
Early signs, but nothing definitive:
Inflation
Reduced consumer demand
Business uncertainty amidst changing tariffs.
Generative AI subscriptions seem to be a luxury expense for most people rather than essential for completing work (particularly because ~free alternatives exist to switch to, and for most users those aren’t significantly different in use). Enterprises and consumers could cut their subscriptions heavily once facing a recession.
Early signs of a large progressive organising front, hindering tech-conservative alliances.
#TeslaTakedown.
Various conversations by organisers with a renewed motivation to be strategic.
Last few years’ resurgence of ‘organising for power’ union efforts, overturning top-down mobilising and advocacy approaches.
Increasing awareness of fuck-ups in the efficiency drives by the Trump-Musk administration coalition.
Against:
Current US administration’s strong public stance on maintaining America’s edge around AI.
Public announcements.
JD Vance’s speech at the renamed AI Action Summit.
Clearing out regulation
Scrapped Biden AI executive order.
Copyright
Talks, as in the UK and EU, about effectively scrapping copyright for AI training materials (with opt-out laws, or by scrapping the opt-out too).
Stopping enforcement of regulation
Removing Lina Khan as head of the FTC, which was investigating AI companies.
Musk’s internal dismantling of departments engaged in oversight.
Internal deployment of AI models for (questionable) uses.
US IRS announcement.
DOGE attempts at using AI to automate the evaluation and work of bureaucrats.
The accelerationist lobby’s influence has been increasing.
Musk, Zuckerberg, Andreessen, other network-state folks, etc, have been very strategic in
funding and advising politicians,
establishing coalitions with people on the right (incl. Christian conservatives, and channeling populist backlashes against globalism and militant wokeness),
establishing social media platforms for amplifying their views (X, network of popular independent podcasts like Joe Rogan show).
Simultaneous gutting of traditional media.
Faltering anti-AI lawsuits
Signs of corruption among plaintiff lawyers,
e.g. in the case against Meta, where crucial arguments were not made, and the judge considered not allowing class representation.
Defense contracts
The US military has a budget in the trillions of dollars, and could in principle keep the US AI corporations propped up.
Possibility that something changes geopolitically (war threat?) resulting in large funds injection.
My guess is that the Pentagon is already treating AGI labs such as OpenAI and Anthropic as strategic assets (to control, and possibly prop up if their existence is threatened).
Currently seeing cross-company partnerships.
OpenAI with Anduril, Anthropic with Palantir.
National agenda pushes to compete in various countries.
Incl. China, UK, EU.
Recent increased promotion/justification in and around US political circles of the need to compete with China.
New capability development
Given the scale of AI research happening now, it is quite possible that some team will develop a new cross-domain-optimising model architecture that’s data- and compute-efficient.
As researchers come to acknowledge the failure of the ‘scaling laws’-focussed approach using existing transformer architectures (given limited online-available data and diminishing marginal returns on compute), they will naturally look for alternative architecture designs to work on.