Milan_Griffes
[Link] Centre for Applied Eschatology
Some back-and-forth on this between Eliezer & me in this thread.
Compare the number of steps required for an agent to initiate the launch of existing missiles to the number of steps required for an agent to build & use a missile-launching infrastructure de novo.
Here’s Ben Hoffman on burnout & building community institutions: Humans need places
That thread branches sorta crazily; here’s the current bottom of one path.
This is the Ben Hoffman essay I had in mind: Against responsibility
(I’m more confused about his EA is self-recommending)
This orientation resonates with me too fwiw.
Existing nuclear weapon infrastructure, especially ICBMs, could be manipulated by a powerful AI to further its goals (which may well be orthogonal to our goals).
Researching valence for AI alignment
Artificial Intelligence, Values and Reflective Processes
In psychology, valence refers to the attractiveness, neutrality, or aversiveness of subjective experience. Improving our understanding of valence and its principal components could have large implications for how we approach AI alignment. For example, determining the extent to which valence is an intrinsic property of reality could provide computer-legible targets to align AI towards. This could be investigated experimentally: the relationships between experiences, their neural correlates, and subjective reports could be mapped across a large sample of subjects and cultural contexts.
Nuclear arms reduction to lower AI risk
Artificial Intelligence and Great Power Relations
In addition to being an existential risk in their own right, the continued existence of large numbers of launch-ready nuclear weapons also bears on risks from transformative AI. Existing launch-ready nuclear weapon systems could be manipulated or leveraged by a powerful AI to further its goals if it decided to behave adversarially towards humans. We think the dynamics of this risk and the possible policy responses to it are under-researched and would benefit from further investigation.
Researching the relationship between subjective well-being and political stability
Great Power Relations, Values and Reflective Processes
Early research has found a strong association between a society’s political stability and the reported subjective well-being of its population. Political stability appears to be a major existential risk factor. Better understanding this relationship, perhaps by investigating natural experiments and running controlled experiments, could inform our views of appropriate policy-making and intervention points.
This seems like a good entry for the Future Fund prize competition
High-quality human performance is much more engaging than autogenerated audio, fwiw.
This got a nice shout-out on Marginal Revolution today.
Thanks for doing this!
For all three – how would you like to see EA participate in the psychedelic renaissance? What do you think a good marriage of the two communities would look like?
I don’t think we’re yet collectively wise enough to engage in memetic and/or tech projects that undermine evolutionary equilibria, fwiw.
QRI = the Qualia Research Institute
80k wubstepping all night long
+1