I don’t have much time to respond here and haven’t thought much about my thesis since I wrote it almost seven years ago (and would probably find much of it embarrassing now in the light of the replication crisis + my better grasp on philosophy). A few notes:
I think that humans do something akin to CEV as part of our daily lives — we experience impulses, then rein them back (“take a deep breath”, “think of what X would say”, “put yourself in their shoes”...) It seems like we’re usually happier with our choices when we’ve taken more time to think them over (though this isn’t always true — people do argue themselves out of doing what they later believe they should have done). I’ve always pictured CEV as an extremely advanced version of this, but one that we would still recognize as descending from that original process.
I also see ethics as a subset of desires.
I don’t think “System 1” and “System 2” are real things in a sense that would make “System 2 has no desires” a sensible statement. Humans do a lot of things with our minds, those things have properties, and we try to split them up based on those properties — but this doesn’t mean that the resulting clusters have to refer to real things. Even when I’m doing my deepest “EA thinking”, there’s always a thread of “why” I can pull that leads back to something I care about “just because”, and it seems like the same thing would be true for every other mental process. (And by “a thread”, I mean “multiple threads”, and sometimes I’ll look at where all the threads end and wind up needing to make an arbitrary call about which “just because” thing to prioritize.)