Okay, forget what I said; I sure can tie myself up in knots. Here’s another attempt:
If a person is faced with the decision to either save 100 out of 300 people for sure, or take a 60% chance of saving everyone, they are likely (in my experience asking friends) to answer something like “I don’t gamble with human lives” or “I don’t see the point of thought experiments like this”. Eliezer Yudkowsky claims in his “Something to Protect” post that if those same people were faced with this problem and a loved one was among the 300, they would have more incentive to ‘shut up and multiply’. People are more likely to choose what has more expected value when they are more entangled with the end result (and less likely to, e.g., signal indignation at having to gamble with lives).
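To spell out the arithmetic behind ‘shut up and multiply’ (a minimal sketch, assuming you simply count expected lives saved and nothing else):

$$
\mathbb{E}[\text{lives saved} \mid \text{sure option}] = 100,
\qquad
\mathbb{E}[\text{lives saved} \mid \text{gamble}] = 0.6 \times 300 + 0.4 \times 0 = 180.
$$

So the gamble saves 80 more lives in expectation, even though taking it feels like gambling with lives.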
I see this in practice, and I’m sure you can relate: I’ve often been told by family members that putting numbers on altruism takes the whole spirit out of it, or that “malaria isn’t the only important thing, coral is important too!”, or that “money is complicated and you can’t equate wasted money with wasted opportunities for altruism”.
These ideas look perfectly reasonable to them, but I don’t think they would hold up for a second if their child had cancer: “putting numbers on cancer treatment for your child takes the whole spirit out of saving them (as if you could put a number on love)”, or “your child surviving isn’t the only important thing, coral is important too”, or “money is complicated, and you can’t equate wasting money with spending less on your child’s treatment”.
Those examples might be a bit personal. My point is that entangling the outcome with something you care about makes you more likely to try to make the right choice. Perhaps I shouldn’t have used the word “rationality” at all. “Rationality” might be a valuable component in making the right choice, but for my purposes I only care about making the right choice, no matter how you get there.
The practical insight is that you should start by thinking about what you actually care about, and then backchain from there. If I start off deciding that I want to maximize my family’s odds of survival, I think I am more likely to take AI risk seriously (in no small part, I think, because signalling sanity by scoffing at ‘sci-fi scenarios’ is no longer something that matters).
I am designing a survey I will send tonight to some university students to test this claim.
Nate Soares excellently describes this process.