Mogensen and Wiblin discuss this problem in this podcast episode, fwiw. That’s all I know, sorry.
Btw, if you really endorse your solution (and set aside the possibility of aliens colonizing our corner of the universe someday), I think you should find GCP’s take on the value of reducing x-risks (and the take of most people on this Forum) deeply problematic. Do you agree, or do you believe that a future in which humanity survives and acts across our light cone will not contain any suffering (or anything worse than the suffering of the one Jones in the “Transmitter Room Problem”)? You’ve got me curious.
I’m not sure I follow. Are you saying that accepting that there is a finite amount of potential suffering in our future implies that x-risk reduction is problematic?
Sorry, that wasn’t super clear. I’m saying that if you believe there is more total suffering in a human-controlled future than in a future not controlled by humans, then x-risk reduction would be problematic from the point of view you defend in your post.
So if you endorse that point of view, you should believe either that x-risk reduction is bad or that there isn’t more total suffering in a human-controlled future. Either belief would be unusual (which doesn’t mean you’re wrong), and that’s why I was curious.