I meant existential risk in the broad sense, not just extinction. The first graph is supposed to represent a specific worldview on which the relevant form of existential risk is extinction, and extinction is reasonably likely. In particular, I had Eliezer Yudkowsky’s views about AI in mind. (But I decided to draw the graph with the transition around 50% rather than his ~99%, because I thought it would be clearer.) One could certainly draw many more graphs, or change the descriptions of the existing ones, and still not capture everyone’s view of the function mapping percentile performance to total realized value.
Thanks for explaining how you think about this issue; I will have to consider it more. My first thought is that I’m not utilitarian enough to say that a universe full of happy biological beings is ~0.0% as good as one full of digital beings, even conditional on remaining biological being the wrong decision. But maybe I would agree about other possible disjunctive traps.