Technoprogressive, biocosmist, rationalist, defensive accelerationist, longtermist
Matrice Jacobine
otoh, this is funny:
Peter Thiel, the tech billionaire and a frequent Gates critic, said in an interview that he had privately encouraged around a dozen Giving Pledge signers to undo it. "Most of the ones I've talked to have at least expressed regret about signing it," he said. He has his own Epstein ties, but he calls the Pledge an "Epstein-adjacent, fake Boomer club."
Fully autonomous weapons seem to me a clear-cut case for differential acceleration in any case: they offer no legitimate battlefield advantage to law-abiding democratic countries (human reflexes are already at the top of the sigmoid; this is one of our main evolutionarily-selected skills, for obvious reasons), but they allow authoritarians to establish a military dictatorship with minimal staff (historically, "the army is ultimately made up of ordinary people who can refuse to shoot their brethren and/or shoot the dictator instead" has been an important pressure valve), or to organize genocidal massacres with automated recognition of targeted civilians (i.e. the FLI Slaughterbots scenario).
Democracy promotion is a common interest of many causes. It's highly unlikely we can do anything about global poverty, factory farming, or existential risk (and potentially that we will ever be able to again) if all world powers become repressive autocracies squashing any sign of moral cosmopolitanism and freethought.
This is about the RSP v3 (front-page post), wholly unrelated.
My mistake!
"Longtermists should primarily concern themselves with the lives/welfare/rights/etc. of future non-human minds, not humans."
"AI safety advocates should primarily seek an understanding with {AI ethics advocates, AI acceleration advocates}."
"It would be preferable for progress of open-weights models to keep up with progress of closed-weights models."
"Countering democratic backsliding is now a more urgent issue than more traditional longtermist concerns."
This was a linkpost; I didn't write that paper.
I'm not sure how you're defining nihilism there?
The term dates from at least 2009.
Up until recently, there was no name for the cluster of views that involved concern about ensuring the long-run future goes as well as possible. The most common language to refer to this cluster of views was just to say something like "people interested in x-risk reduction". There are a few reasons why this terminology isn't ideal [...]
For these reasons, and with Toby Ord's in-progress book on existential risk providing urgency, Toby and Joe Carlsmith started leading discussions about whether there were better terms to use. In October 2017, I proposed the term "longtermism", with the following definition:
Yes. One of the Four Focus Areas of Effective Altruism (2013) was "The Long-Term Future", and "Far future-focused EAs" are on the map of Bay Area memespace (2013). This social and ideological cluster had existed long before this exact name was coined to refer to it.
I'm not sure what StopAI meant by Mr. Kirchner not having, to its knowledge, "yet crossed a line [he] can't come back from," but to be clear: his time working on AI issues in any capacity has to be over.
This unfortunately does not seem to be StopAI's stance.
One point I made that didn't come across:
- Scaling the current thing will keep leading to improvements. In particular, it won't stall.
- But something important will continue to be missing.
It seems unfortunately plausible that, despite technological progress toward alternatives to meat, humans have a revealed terminal preference for animal suffering, which means that short of extinction we are on a default trajectory toward astronomical suffering.