Three specific good things from the paper which I'd like to highlight:
Their concept of "attractor states" seemed useful to me.
It's similar to the existing ideas of existential catastrophe, lock-in (e.g., value lock-in), and trajectory change. But it's a bit different, and I think having multiple concepts/models/framings is often useful.
The distinction between axiological strong longtermism and deontic strong longtermism is interesting.
Axiological strong longtermism is the claim that "In a wide class of decision situations, the option that is ex ante best is contained in a fairly small subset of options whose ex ante effects on the very long-run future are best"
Deontic strong longtermism is the claim that "In a wide class of decision situations, the option one ought, ex ante, to choose is contained in a fairly small subset of options whose ex ante effects on the very long-run future are best"
I thought that the section on world government was a handy summary of important ideas on that topic (which is, in my view, under-discussed in EA).
See also posts tagged Totalitarianism or Global governance.
(These were not necessarily the most important good things about the paper, and were certainly not the only ones.)
Tangent: A quote to elaborate on why I think having multiple concepts/models/framings is often useful.
This quote is from Owen Cotton-Barratt on the 80,000 Hours Podcast, and it basically matches my own views:
And when we build some model like this, we're focusing attention on some aspects of [the world]. And because attention is a bit of a limited resource, it's pulling attention away from other things. And so if we say, "Well, we want to analyze everything in terms of these abstract defense layers," it's pulling attention away from, "Okay, let's just understand what we currently guess are the biggest risks," and going in and analyzing those on a case by case basis.
And I tend to think that the right approach is not to say, "Well, we just want to look for the model which is making the best set of trade offs here", and is more to say, "We want to step in and out and try different models which have different lenses that they're bringing on the problem and we'll try and understand it as much as possible from lots of different angles". Maybe we take an insight that we got from one lens and we try and work out, "Okay, how do we import that and what does it mean in this other interpretation?"