Technoprogressive, biocosmist, rationalist, defensive accelerationist, longtermist
Matrice Jacobine🌸🏳️‍⚧️
I'm not sure how you're defining nihilism there?
The term dates from at least 2009.
Up until recently, there was no name for the cluster of views that involved concern about ensuring the long-run future goes as well as possible. The most common language to refer to this cluster of views was just to say something like "people interested in x-risk reduction". There are a few reasons why this terminology isn't ideal [...]
For these reasons, and with Toby Ord's in-progress book on existential risk providing urgency, Toby and Joe Carlsmith started leading discussions about whether there were better terms to use. In October 2017, I proposed the term "longtermism", with the following definition:
Yes. One of the Four Focus Areas of Effective Altruism (2013) was "The Long-Term Future", and "Far future-focused EAs" are on the map of Bay Area memespace (2013). This social and ideological cluster existed long before this exact name was coined to refer to it.
I'm not sure what StopAI meant by Mr. Kirchner not having, to its knowledge, "yet crossed a line [he] can't come back from," but to be clear: his time working on AI issues in any capacity has to be over.
This unfortunately does not seem to be StopAI's stance.
One point I made that didn't come across:
- Scaling the current thing will keep leading to improvements. In particular, it won't stall.
- But something important will continue to be missing.
OpenAI Locks Down San Francisco Offices Following Alleged Threat From Activist
Social media recommendation algorithms are typically based on machine learning and generally fall under the purview of near-term AI ethics.
Sorry, I don't know where I got that R from.
I'm giving a ✓ to this overall, but I should add that conservative AI policy think tanks like FAI are probably overall accelerating the AI race, which should be a worry for both AI x-risk EAs and near-term AI ethicists.
Exclusive: Here's the draft Trump executive order on AI preemption
The end of progress against extreme poverty?
You can formally, mathematically prove a programmable calculator correct. You just can't formally prove every possible programmable calculator correct. On the other hand, if you can't formally prove a given programmable calculator correct, it might be a sign that your design is a horrible sludge. On the other other hand, deep-learnt neural networks are definitionally horrible sludge.
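For concreteness, here is a minimal sketch in Lean 4 of the kind of thing "formally proving a calculator" means: a toy expression language plus a machine-checked theorem about its evaluator. The `Expr` type and the `eval_add_comm` theorem are my own illustrative assumptions, not anything from the discussion above.

```lean
-- A toy "programmable calculator": an expression language with constants
-- and addition, together with its evaluation semantics.
inductive Expr where
  | const : Nat → Expr
  | add   : Expr → Expr → Expr

-- The calculator's semantics: evaluate an expression to a number.
def eval : Expr → Nat
  | .const n => n
  | .add a b => eval a + eval b

-- A formal guarantee, checked by the proof assistant: swapping the
-- operands of `add` never changes the result.
theorem eval_add_comm (a b : Expr) : eval (.add a b) = eval (.add b a) := by
  simp only [eval]
  exact Nat.add_comm (eval a) (eval b)
```

A design with clean compositional semantics like this admits such proofs; a design that resists them is often the "horrible sludge" case.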
Yes, those quotes do refer to the need for a model to develop heterogeneous skills based on private information, and to adapt to changing situations in real life with very little data. I don't see your problem.
How are "heterogeneous skills" based on private information and "adapting to changing situations in real time with very little data" not what continual learning means?
1) physical limits to scaling, 2) the inability to learn from video data, 3) the lack of abundant human examples for most human skills, 4) data inefficiency, and 5) poor generalization
All of those except 2) boil down to "foundation models have to learn once and for all through training on collected datasets instead of continually learning for each instantiation". See also AGI's Last Bottlenecks.
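As a toy illustration of that distinction (entirely my own sketch, with made-up data and a hypothetical `make_batch` helper, not anything from the linked posts): a model fit once on a frozen dataset versus the same model making small online updates as the distribution drifts.

```python
# Toy contrast: "train once on a collected dataset" vs. continual,
# per-instance learning under distribution shift.
import numpy as np

rng = np.random.default_rng(0)

def make_batch(n, drift=0.0):
    """Toy 1-D regression task whose true slope drifts over time."""
    x = rng.normal(size=n)
    y = (2.0 + drift) * x + rng.normal(scale=0.1, size=n)
    return x, y

# "Foundation model" regime: fit once on a fixed dataset, then freeze.
x_train, y_train = make_batch(1000, drift=0.0)
w_frozen = (x_train @ y_train) / (x_train @ x_train)  # least-squares slope

# Continual regime: same initialization, but keep taking small gradient
# steps on each new observation as the environment drifts.
w_online, lr = w_frozen, 0.05
for t in range(1000):
    x, y = make_batch(1, drift=t / 500)          # gradual distribution shift
    grad = 2 * x[0] * (w_online * x[0] - y[0])   # d/dw of squared error
    w_online -= lr * grad

# Evaluate both after the drift: the frozen model is stuck at the old slope.
x_test, y_test = make_batch(200, drift=2.0)
for name, w in [("frozen", w_frozen), ("online", w_online)]:
    mse = np.mean((w * x_test - y_test) ** 2)
    print(f"{name}: slope={w:.2f}, test MSE={mse:.3f}")
```

The frozen model keeps the slope it learned "once and for all", while the online learner tracks the shifted environment, which is the gap the comment is pointing at.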
But the environment (and animal welfare) is still worse off in post-industrial societies than in pre-industrial societies, so you cannot credibly claim that going from pre-industrial to industrial (which is what we generally mean by global health and development) is an environmental issue (or an animal welfare issue). It's unclear whether helping societies go from industrial to post-industrial is tractable, but that would typically fall under progress studies, not global health and development.
This was a linkpost; I didn't write that paper.