“As tagged, this story strikes me as a fable intended to explain one of the mechanisms behind so-called “S-risks”, hellish scenarios that might be a fate worse than the “death” represented by X-risks.”
That’s what I was going for, although I’m aware that I didn’t make this as clear as I should have.
“Of course it’s a little confusing to have the twist with the sentient birds—I think rather than a literal “farmed animal welfare” thing, this is intended to showcase a situation where two different civilizations have very different values.”
Same thing here. This is what I was trying to get at, but couldn’t think of many other scenarios involving suffering agents where one group of people cares and another doesn’t.
“I don’t really understand why the story is a frame story, or why the main purpose of the ritual is for all the Kunus to feel “collective guilt”… EA is usually trying to steer away from giving the impression that we want everyone to feel guilty all the time.”
This is really helpful feedback—I didn’t realize that “collective guilt” came across as the point of the story, and I definitely agree that making people feel guilty is counterproductive. I can’t remember why I threw in that phrase (probably because I couldn’t think of anything else), but I’ll change it now.
“Totally unrelated point, but I thought the economics of this story were a little wacky.”
Yup, definitely more than a “little” wacky :) Maybe using another resource like food or water or land would be better—but then it would have been harder to make the point that each country thought they were doing the right thing.
“This is a good part of the parable—if S-risks ever occur, the civilizations that commit those galactic war crimes will probably be convinced of their righteousness, and indeed probably won’t even recognize that they are committing a wrong.”
This is the central point that I wanted to get across. Whether we’re considering a civilization or an advanced AI, S-risks need not result from intentional malevolence. I’m glad it didn’t get too distorted, but it seems like there are better ways to build a story around this point.
Another side-note: a lot of the ideas behind this story are discussed in the Center on Long-Term Risk’s research agenda. I don’t know whether they would agree with my presentation or conceptualization of those ideas.
Thank you so much for the feedback!