Thanks for sharing this!
I happen to have made a not-very-good model a month or so ago to try to get a sense of how much the possibility of future species that care about x-risks affects x-risk today. It’s here, and it has a bunch of issues (like assuming that a new species would take as long to evolve from now as humans took to evolve from the first neuron, assuming that none of Ord’s x-risks reduce the chance of future moral agents evolving, etc.), and it possibly doesn’t even get at the important things mentioned in this post.
But based on the relatively bad assumptions in it, it spat out that if we generally expect that, should an existential event happen, new moral agents who reach Ord’s 16% 100-year x-risk will evolve every 500 million years or so, and that most of the value of the future lies beyond the next 0.8 to 1.2B years, then we ought to adjust Ord’s figure down to 9.8% to 12%.
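To give a rough idea of the kind of adjustment I mean, here is a minimal Python sketch. It is not the model linked above, and the numbers (especially the assumed per-successor failure probability) are made up for illustration, so it won’t reproduce the 9.8% to 12% figures; it just shows the structure of scaling the headline risk by the chance that extinction today is permanent.

```python
# Toy sketch (not the model linked above): discount today's x-risk by the
# chance that, if we go extinct, no later species re-evolves in time to
# capture most of the future's value. All parameters are illustrative.

def adjusted_xrisk(headline_risk=0.16,      # Ord's 16% per-century x-risk
                   reevolution_time=0.5e9,  # years for new moral agents to evolve
                   value_horizon=1.0e9,     # years until the recovery window closes
                   successor_failure=0.5):  # assumed chance each successor also fails
    """Scale the headline x-risk by the probability that extinction today is
    permanent, i.e. no successor secures the future before the window closes."""
    # Number of re-evolution attempts that fit before the window closes.
    attempts = int(value_horizon // reevolution_time)
    # Probability that every attempt fails, so the value is lost for good.
    p_permanent_loss = successor_failure ** attempts
    return headline_risk * p_permanent_loss

for horizon in (0.8e9, 1.2e9):
    print(f"horizon {horizon:.1e} years -> adjusted risk "
          f"{adjusted_xrisk(value_horizon=horizon):.3f}")
```

With these placeholder numbers the downward adjustment comes out much larger than the figures above, which mostly shows how sensitive this kind of calculation is to the assumed chance that successor species also fail.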
I don’t think either the figure or the approach should be taken at all seriously, though, as I spent only a couple of minutes on it and didn’t think at all about better ways to do this; just writing this explanation has shown me a lot of ways in which it is bad. It just seemed relevant to this post and I wasn’t going to do anything else with it :).
Thanks for your comment! I’m very interested to hear about a modelling approach. I’ll look at your model and will probably have questions in the near future!
Hey! Your link sends us to this very post. Is this intentional?
Nope—fixed. Thanks for pointing that out.