Thanks!
Just to be clear: my rough simplification of the “Pinker hypothesis” isn’t that people have an all-around-peaceful psychology. It is, as you say, a hypothesis about how far we expect recent trends toward peace to continue. In particular, it’s the hypothesis that there’s no hard lower bound to the “violence level” we can reach, so that, as we make technological and social progress, we will ultimately approach a state of being perfectly peaceful. The alternative hypothesis I’m contrasting this with is a future in which we can only ever get things down to, say, one world war per century. If the former hypothesis isn’t actually Pinker’s, then my sincere apologies! I really just mean to outline two hypotheses one might be uncertain between, in order to illustrate the qualitative point about the conditional value of the future.
That said, I certainly agree that moral circle expansion seems like a good thing to do, for making the world better conditional on survival, without running the risk of “saving a bad world”. And I’m excited by Sentience’s work on it. Also, I think it might have the benefit of lowering x-risk in the long run (if it really succeeds, we’ll have fewer wars and such). And, come to think of it, it has the nice feature that, since it will only lower x-risk if it succeeds in other ways, it disproportionately saves “good worlds” in the end.