Technoprogressive, biocosmist, rationalist, defensive accelerationist, longtermist
Matrice Jacobine
anti-LLM arguments from people like Yann LeCun and François Chollet
François Chollet has since adjusted his AGI timelines to 5 years.
While this is ostensibly called “strong longtermism”, the precision of saying “near-best” instead of “best” makes (i) hard to deny (the opposite statement would be “one ought to choose an option that is significantly far from the best for the far future”). The best cruxes against (ii) would be epistemic ones i.e. whether benefits rapidly diminish or wash out over time.
I agree with you on the meta case of suspicion about Open Philanthropy leadership, but in this case AFAICT the Center for AI Policy was funded by the Survival and Flourishing Fund, which is aligned with the rationalist cluster and also funds PauseAI.
There’s a decent amount of French-speaking ~AI safety content on YouTube:
@Shaïman Thürler’s channel Le Futurologue
@Gaetan_Selle 🔷 ’s channel The Flares
@len.hoang.lnh’s channel Science4All and Thibaut Giraud’s channel Monsieur Phi, the two channels cited by 9 of the 17 respondents who named a YouTube channel as where they first heard of EA in the 2020 EA Survey
The CeSIA advised on the production of this video, which had reached nearly five million views at the time of writing (i.e. plausibly >10% of the French adult population)
The CeSIA also gave an interview to the popular scientific-skeptic channel La Tronche en Biais, which has more recently expressed interest in posting more content on AI safety
David Louapre, who runs Science Étonnante, one of the most popular French-language science channels, announced just this week that he is pivoting to work as an AI safety researcher at Hugging Face, so it’s possible more will come from that direction too
I added a bunch of relevant tags to your post that might help you search the forum better.
Do you think work on AI welfare can count as part of Cooperative AI (i.e. as fostering cooperation between biological minds and digital minds)?
Nvidia Comes Out Swinging as Congress Weighs Limits on China Chip Sales
It strikes me as very unlikely that a rudimentary Pong-playing AI running on biological wetware is more sentient than a modern LLM running on digital hardware.
One of the killings was, as far as we know, purely mimetic and (allegedly) committed by someone (@Maximilian Snyder) who never even interacted online with Ziz, so I don’t think it’s actually an invalid example to bring up.
This is a very big “besides”!
I’ve known EAs who have been all-consumed by abstract guilt. It has never led them to produce the greatest good for the greatest number. At best it has led them to being chronically depressed and unable to do any stable work. At worst it has led to highly net-negative actions like joining a cult.
Thiel describing a 2024 conversation with Elon Musk and Demis Hassabis, where Elon is saying “I’m working on going to Mars, it’s the most important project in the world” and Demis argues “actually my project is the most important in the world; my superintelligence will change everything, and it will follow you to Mars”. (This is in the context of Thiel’s long pivot from libertarianism to a darker strain of conservatism / neoreaction, having realized that “there’s nowhere else to go” to escape mainstream culture/civilization, that you can’t escape to outer space, cyberspace, or the oceans as he once hoped, but can only stay and fight to seize control of the one future; hence all these musings about Carl Schmitt etc. that make me feel wary he is going to be egging on J. D. Vance to try and auto-coup the government.)
FTR: while Thiel has claimed this version before, the more common version (e.g. here, here, here from Hassabis’ mouth, and more obliquely here in his lawsuit against Altman) is that Hassabis was warning Musk about existential risk from unaligned AGI, not threatening him with his own personally aligned AGI. However, this interpretation is interestingly resonant with Elon Musk’s creation of OpenAI being motivated by fear of Hassabis becoming an AGI dictator (a fear his co-founders apparently shared). It is certainly an interesting hypothesis that Thiel and Musk spent a decade engineering both the AGI race and global democratic backsliding, wholly motivated by a single one-sentence possible slight by Hassabis in 2012.
If there were nuclear war without nuclear winter, there would be a dramatic loss of industrial capability which would cascade through the global system. However, being prepared to scale up alternatives such as wood gas powered vehicles producing electricity would significantly speed recovery and reduce mortality. I think if fewer people were killing each other over scarce resources, values would be better, so global totalitarianism would be less likely and bad values locked into AI would be less likely. Similarly, if there were nuclear winter, I think the default is countries banning trade and fighting over limited food. But if countries realized they could feed everyone if they cooperated, I think cooperation is more likely and that would result in better values for the future.
The overwhelming majority of Manhattan Project scientists, as well as the Undersecretary of the Navy, believed there should be a warning shot. It makes total sense from a game-theoretic perspective to fire warning shots when you believe your military advantage has increased in a way that significantly changes your adversary’s calculus.
I remember Thiel explicitly explaining this somewhere (i.e., saying “we need to repair the intergenerational compact so all these young people stop turning socialist”), but unfortunately I don’t remember where he said this, so I don’t have a link.
https://www.techemails.com/p/mark-zuckerberg-peter-thiel-millennials
Current LLMs already have some level of biological capabilities and a near-zero contribution to cumulative GDP growth. The assertion that “there’s a huge gulf between capabilities that can get you ~10% cumulative GDP growth and capabilities that can kill billions of people” seems to imply that biological capabilities will scale orders of magnitude more slowly than the capabilities in every other field required to contribute to GDP, and I see absolutely no evidence for believing that.
there’s a huge gulf between capabilities that can get you ~10% cumulative GDP growth and capabilities that can kill billions of people
This is not clear to me, and my impression is that most AI safety people would disagree with this statement as well, considering the high generality of AI capabilities.
China proposes new global AI cooperation organisation
Just a month ago, Anthropic and the rest of the industry were celebrating what looked like a landmark victory. Alsup had ruled that using copyrighted books to train an AI model — so long as the books were lawfully acquired — was protected as “fair use.” This was the legal shield the AI industry has been banking on, and it would have let Anthropic, OpenAI, and others off the hook for the core act of model training.
But Alsup split a very fine hair. In the same ruling, he found that Anthropic’s wholesale downloading and storage of millions of pirated books — via infamous “pirate libraries” like LibGen and PiLiMi — was not covered by fair use at all. In other words: training on lawfully acquired books is one thing, but stockpiling a central library of stolen copies is classic copyright infringement.
Most of OpenAI’s 2024 compute went to experiments