To some extent I reject the question as not-super-action-guiding (I think that a lot of work people do has impacts on both things).
But taking it at face value, I think that AI x-risk is almost all about increasing the value of futures where “we” survive (even if all the humans die), and deserves most attention. Literal extinction of earth-originating intelligence is mostly a risk from future war, which I do think deserves some real attention, but isn’t the main priority right now.
Hope I’m not misreading your comment, but I think you might have voted incorrectly, as if the scale is flipped.
I think Owen is voting correctly, Robi: he disagrees that there should be more work on extinction reduction before there is more work on improving the value of the future. (To complicate this, he understands work on AI x-risk as mostly being about increasing the value of the future, because, in his view, it isn't likely to lead to extinction.)
Apologies if the “agree” “disagree” labelling is unclear—we’re thinking of ways to make it more parsable.
This is right. But to add even more complication:
- I think most AI x-risk (in expectation) doesn't lead to human extinction, but a noticeable fraction does.
- But even much of the fraction that leads to human extinction seems to me like it probably doesn't count as "extinction" by the standards of this question, since it still leaves earth-originating intelligence which can go out and do stuff in the universe.
- However, I sort of expect people to naturally count this as "extinction"?
Since it wasn’t cruxy for my rough overall position, I didn’t resolve this last question before voting, although maybe it would get me to tweak my position a little.
Ah yes I get it now. Thanks!
No worries!