Technoprogressive, biocosmist, rationalist, defensive accelerationist, longtermist
Matrice Jacobine🔸🏳️⚧️
Anthropic: “Statement from Dario Amodei on our discussions with the Department of War”
Scoop: Pentagon takes first step toward blacklisting Anthropic
This is about the RSP v3 (front-page post), wholly unrelated.
Exclusive: Hegseth gives Anthropic until Friday to back down on AI safeguards
My mistake!
Alignment to Evil
AI and Nationalism Are a Deadly Combination
Exclusive: Key US infectious-diseases centre to drop pandemic preparation
Pentagon’s use of Claude during Maduro raid sparks Anthropic feud
“Longtermists should primarily concern themselves with the lives/welfare/rights/etc. of future non-human minds, not humans.”
“AI safety advocates should primarily seek an understanding with {AI ethics advocates, AI acceleration advocates}.”
“It would be preferable for progress of open-weights models to keep up with progress of closed-weights models.”
“Countering democratic backsliding is now a more urgent issue than more traditional longtermist concerns.”
This was a linkpost, I didn’t write that paper.
I’m not sure how you’re defining nihilism there?
The term dates from at least 2009.
Up until recently, there was no name for the cluster of views that involved concern about ensuring the long-run future goes as well as possible. The most common language to refer to this cluster of views was just to say something like ‘people interested in x-risk reduction’. There are a few reasons why this terminology isn’t ideal [...]
For these reasons, and with Toby Ord’s in-progress book on existential risk providing urgency, Toby and Joe Carlsmith started leading discussions about whether there were better terms to use. In October 2017, I proposed the term ‘longtermism’, with the following definition:
Yes. One of the Four Focus Areas of Effective Altruism (2013) was “The Long-Term Future”, and “Far future-focused EAs” appear on the map of Bay Area memespace (2013). This social and ideological cluster existed long before this exact name was coined for it.
Democracy promotion is a common interest of many causes. It’s highly unlikely we can do anything about global poverty, factory farming, or existential risk (or will ever be able to again) if all world powers become repressive autocracies squashing any sign of moral cosmopolitanism and freethought.