Dawn—an important issue, and I don’t know the answer.
I haven’t read much about S-risk, so treat my comments as very naive.
My hunch is that people who have read the science fiction novel ‘Surface Detail’ (2010) by Iain M. Banks are likely to take S-risk seriously, and those who haven’t, not so much. The novel portrays a world in which people’s mind-states are uploaded and updated; if their bodies die and they’re judged to be ‘bad people’, their mind-states are tormented in a virtual hell for subjective eons. It’s a harrowing read.
But the premise depends on psychopathic religious fundamentalists using machine intelligences to impose this digital hell on digital people, in line with their theological notions of who deserves punishment.
Outside the realm of vengeful religions leading powerful psychopaths to impose digital suffering on digital sentiences, it’s somewhat difficult to imagine realistic situations in which any entity would want to deliberately inflict wide-scale suffering on others.
And even if they did, others who are less psychopathic might notice and intervene to save the tormented digital souls (as they did in the novel).
I haven’t read the novel, so I can’t comment on that part. But as I commented above, “I can think of plenty of scenarios that are ‘realistic’ by AI safety standards… Scenarios that are inspired by stuff that terrorists do all the time when they’re fighting powerful governments, so lots of precedents in history, and whose realism only suffers a bit because they would not be technically possible for humans with today’s technology.”
PS—for folks who disagree-voted on this post, I’m curious: what did you disagree with?
My guess is that people disagreed with the notion that the novel is a significant reason why most people take s-risks seriously. I too was a bit puzzled by that part, but I found the comment enlightening even though I disagreed with it.
My impression is that readers of the EA Forum have, since 2022, become much more prone to downvoting stuff just because they disagree with it. LW seems to be slightly better at understanding that “karma” and “disagreement” are separate things, and that you should up-karma stuff if you personally benefited from reading it, and separately up-agree or down-agree depending on whether you think it’s right or wrong.
Maybe I’m wrong, but perhaps the forum could use a few reminders about the purpose of these buttons—like an opt-out confirmation popup with some guiding principles for when to upvote or downvote each dimension.
rime—thanks for your helpful reply.
I agree that it would be nice for people on the EA Forum to stay disciplined about upvotes versus agree-votes.
It would also be very helpful if there were a norm that people who disagree-vote offer, at least some of the time, explicit reasons for their disagreement—even if only in brief comments.
My mention of the Banks novel wasn’t intended to be taken too literally as an explanation for why some people take S-risk seriously. (Maybe it came across as dismissive or mocking, but it certainly wasn’t meant to be.) For me personally, Surface Detail is simply the only scenario I’ve seen portrayed in fiction, so far, in which there would be any sustainable rationale for AIs to impose long-term suffering on sentient beings.