First and foremost, I’m low confidence here.
I will focus on x-risk from AI, and I will challenge the premise that this is the right way to ask the question.
What is the difference between x-risk and s-risk/increasing the value of futures? When we mention x-risk with regard to AI, we think of humans going extinct, but I believe that to be shorthand (at least in the EA sphere) for a failure of wise, compassionate decision-making.
Personally, I think x-risk and good decision-making about moral value may be coupled. We can think of our current governance systems a bit like error-correction mechanisms for individual mistakes: if the errors pile up, we go off the rails, increasing x-risk as well as the chances of a bad future.
So a good decision-making system should account for both x-risk and value estimation; the solution to each is the same, and the dichotomy is false?
(I might be wrong, and I appreciate the slider question anyway!)
I’ve heard this argument before, but I find it uncompelling on tractability grounds. If we don’t go extinct, it’s likely to be a silent victory; most humans on the planet won’t even realise it happened. Individual humans working on x-risk reduction will probably only impact the morals of the people around them.