Three specific good things from the paper which I’d like to highlight:
Their concept of “attractor states” seemed useful to me.
It’s similar to the existing ideas of existential catastrophe, lock-in (e.g., value lock-in), and trajectory change. But it’s a bit different, and I think having multiple concepts/models/framings is often useful.
The distinction between axiological strong longtermism and deontic strong longtermism is interesting.
Axiological strong longtermism is the claim that "In a wide class of decision situations, the option that is ex ante best is contained in a fairly small subset of options whose ex ante effects on the very long-run future are best".
Deontic strong longtermism is the claim that "In a wide class of decision situations, the option one ought, ex ante, to choose is contained in a fairly small subset of options whose ex ante effects on the very long-run future are best".
I thought that the section on world government was a handy summary of important ideas on that topic (which is, in my view, under-discussed in EA).
See also posts tagged Totalitarianism or Global governance.
(These were not necessarily the most important good things about the paper, and were certainly not the only ones.)
Tangent: A quote to elaborate on why I think having multiple concepts/models/framings is often useful.
This quote is from Owen Cotton-Barratt on the 80,000 Hours Podcast, and it basically matches my own views:
And when we build some model like this, we’re focusing attention on some aspects of [the world]. And because attention is a bit of a limited resource, it’s pulling attention away from other things. And so if we say, “Well, we want to analyze everything in terms of these abstract defense layers,” it’s pulling attention away from, “Okay, let’s just understand what we currently guess are the biggest risks,” and going in and analyzing those on a case-by-case basis.
And I tend to think that the right approach is not to say, “Well, we just want to look for the model which is making the best set of trade-offs here”, and is more to say, “We want to step in and out and try different models which have different lenses that they’re bringing on the problem and we’ll try and understand it as much as possible from lots of different angles”. Maybe we take an insight that we got from one lens and we try and work out, “Okay, how do we import that and what does it mean in this other interpretation?”