In negative longtermism, we sometimes invoke the concept of existential security (which I'll abbreviate to xsec): the idea that at some point the future is freed from xrisk, or that we have in some sense abolished the risk of extinction.
One premise for the current post is that, in a veil-of-ignorance sense, affluent and smart humans alive in the 21st century have duties/responsibilities/obligations (unless they're simply not altruistic at all) derived from Most Important Century arguments.
I think it's tempting to say that the duty, the ask, is to obtain existential security. But I think this is wildly too hard, and I'd like to propose a somewhat different framing.
Xsec is a delusion
I don't think this goal is remotely obtainable. Rather, I think the law of mad science implies that either we'll sustain a commensurate rate of increase in vigilance or we'll die. "Security" implies that we (i.e., our descendants) get to relax at some point, even as the minimum IQ it takes to kill everyone drops further and further. I think this is delusional, and Bostrom says as much in the Vulnerable World Hypothesis (VWH).
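To put this in notation (the symbols are mine and purely illustrative, not anything from VWH): let $d(t)$ stand for the destructive capability available to the most reckless actor at time $t$, and $v(t)$ for the vigilance we bring to bear against it. If the law of mad science means $d(t)$ keeps rising, then survival isn't a state you enter once but a constraint that has to keep holding:

$$v(t) \;\gtrsim\; d(t) \quad \text{for all } t, \qquad \text{with } d'(t) > 0 .$$

There is no time after which the inequality can be allowed to lapse, which is why "security" seems like the wrong word.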
I think the idea that we’d obtain xsec is unnecessarily utopian, and very misleading.
Instead of xsec summed over the whole future, zero in on the next 1-3 generations, and pour your trust into induction
Obtaining xsec seems like something you don’t just do for your grandkids, or for the 22nd century, but for all the centuries in the future.
I think this is too tall an order. Instead of attempting something that's too hard and that we're sure to fail at, we should initialize a class or order of protectors who zero in on getting their first 1-3 successor generations to make it.
In math/computing, we reason about infinite structures (like the whole numbers) by asking what we know about "the base case" (i.e., zero) and by asking what we know about constructions given what we already know about their ingredients (i.e., we would like what we know about n to be transformed into knowledge about n+1). This is how I'm thinking about obtaining xsec, just not all at once. There are no actions we can take today that directly obtain xsec for the 25th century, but if every generation (1) protects its own kids, grandkids, and great-grandkids, and (2) trains and incubates a protector order from among the peers of those kids, grandkids, and great-grandkids, then overall the 25th century is existentially secure.
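As a sketch of the schema (with $S(n)$ as my shorthand for "generation $n$ makes it through with a functioning protector order", not a term of art), this is just ordinary induction:

$$S(0) \;\wedge\; \forall n\,\big(S(n) \rightarrow S(n+1)\big) \;\Longrightarrow\; \forall n\; S(n).$$

The base case is us protecting and training our own next 1-3 generations; the inductive step is each protected generation doing the same for theirs. If both hold, the 25th century comes out secure without anyone in the 21st having acted on it directly.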
Yes, the realities of value drift make it really hard to simply trust induction to work. But I think it’s a much better bet than searching for actions you can take to directly impact arbitrary centuries.
I think when sci-fi like Dune or Foundation reasoned about this, there was a sort of intergenerational lock-in: people are born into the order, they have destinies and fates, and so on. In real life, I think people can opt in and opt out of it. (But I think the 0 IQ approach to this is to just have kids of your own and indoctrinate them, which may or may not even work.)
But overall, I think it's very reasonable to argue that accumulating cultural wisdom among cosmopolitans, altruists, whomever, is the best lever we have right now (especially if you take seriously the idea that we're in the alchemy era of longtermism).