I don’t know enough philosophy to participate meaningfully in the zombies & animal consciousness debate. (It takes me hours to get people who think there’s a 30% chance microbes have qualia to start to understand why they likely don’t. And the word “consciousness” is not a good one, as people mean totally different things when they use it. And Yudkowsky eats fish but not octopi and some other seafood, because he thinks there’s a high enough chance octopi have consciousness. But this is not my area; it’s just something that’s fun to talk about.)
But the critique of FDT doesn’t seem valid at all.
If a simulated copy of you gives in to a threat, it makes sense for the blackmailer to make the identical threat against the real you. If the copy doesn’t give in, it doesn’t make sense for the blackmailer to spend resources just to reduce your utility.
If you’re the kind of agent who gives in to blackmail, everyone across the multiverse will extract everything you have from you, and you’ll end up with quite a negative utility from all the threats you no longer had the resources to give in to. If you don’t give in to threats, you’ll receive far fewer threats and won’t lose nearly as much.
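A toy expected-utility comparison makes this concrete. All the numbers below are made up purely for illustration; the only structural assumption is the one from the argument above, that blackmailers only bother threatening agents they predict will pay:

```python
# Hypothetical numbers for comparing the two policies.
demand = 10              # what a blackmailer extracts each time you give in
harm = 15                # damage if a threat against a refuser is carried out
threats_if_giver = 100   # predictable payers attract many threats
threats_if_refuser = 1   # refusers attract almost none (threatening them wastes resources)
carry_out_rate = 0.1     # fraction of threats against refusers actually executed

utility_give_in = -demand * threats_if_giver
utility_refuse = -harm * carry_out_rate * threats_if_refuser

print(utility_give_in)  # -1000
print(utility_refuse)   # -1.5
```

The exact values don’t matter; as long as your policy influences how many threats get made against you, the refuser comes out far ahead.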
If you’re an AI trained with machine learning and you do what a logical decision theory says you should do, you get a lower loss than an AI that does something else, and you get selected.
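A deliberately cartoonish sketch of that selection pressure, reusing the hypothetical numbers above (real training doesn’t compare whole policies like this, but the direction of the pressure is the point):

```python
# Candidate policies scored by average loss in an environment containing
# predictors who only blackmail agents they predict will pay.
# All numbers are hypothetical, carried over from the example above.

def average_loss(policy: str) -> float:
    if policy == "give_in":
        # Predictors threaten you every episode and you always pay the demand.
        return 10.0
    # "refuse": threats are rare (they don't pay off for the threatener),
    # and only a fraction are ever carried out.
    return 0.1 * 15.0

candidates = ["give_in", "refuse"]
selected = min(candidates, key=average_loss)
print(selected, average_loss(selected))  # refuse 1.5
```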