Absolutely agree! :) I think this also extends to “non-EA” causes and projects that do good: sure, they’re not most effective, but they’re still improving or saving lives and that’s praiseworthy.
Relatedly, I think it’s hard to be motivated by subjective expected value, even if that’s what most people think we should maximize. When something with genuinely high expected value turns out not to be successful (i.e. the failure wasn’t the result of bad analysis), the action should still be praised. I’m afraid that the ranking of actions by expected value diverges significantly from the ranking by expected recognition (from oneself and others), and I think this should be somewhat worrying.
Coming back to the post, I also think the drop in recognition is too large when the absolute value realized is not maximal. I’m curious what the optimal recognition function would be (square root of expected value?), but I think that’s a bit beside the point of this post!
I agree with your point about subjective expected value (although realized value is evidence for subjective expected value). I’m not sure I understand the point in your last paragraph?
My interpretation of Siebe’s point is that we shouldn’t try to scale our praise of people with their expected impact. For example, someone who saves one life probably deserves more than one-billionth of the praise we give to Norman Borlaug.
Reasons I agree with this point, if it was what Siebe was saying:
The marginal value of more recognition eventually drops off, to the point where it’s no longer useful as an incentive (or even desired by the person being recognized).
It’s easy to forget how easy it is to be wrong about predicted impact, or to anchor on high numbers when we’re looking at someone impressive. (Maybe Norman Borlaug actually saved only a hundred million lives; it’s very hard to tell what would have happened without him.) Using something like “square root of expected value” lets us hedge against our uncertainty.
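The “square root of expected value” idea above can be sketched numerically. This is just a toy illustration of how a concave recognition function compresses huge impact gaps, not anything proposed in the post itself:

```python
import math

def recognition(expected_lives_saved):
    """Toy recognition function: praise scales with the square root
    of expected impact rather than linearly with it."""
    return math.sqrt(expected_lives_saved)

# A billion-fold gap in expected impact...
one_life = recognition(1)             # 1.0
borlaug = recognition(1_000_000_000)  # ~31,623

# ...becomes roughly a 30,000-fold gap in recognition.
ratio = borlaug / one_life
```

Under this scaling, saving one life earns far more than one-billionth of Borlaug’s praise, which matches the intuition in the comment above.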
Something along those lines. Thanks for interpreting! :)
What I was getting at was mostly that praise/recognition should be a smooth function, such that things not branded as EA still get recognition if they’re only 1/10th as effective, instead of the current situation (as I perceive it) where something that is not maximally effective is not recognized at all. I notice in myself that I find it harder to assess and recognize the impact of non-EA-branded projects.
I expect this is partly because I don’t have access/don’t understand the reasoning so can’t assess the expected value, but partly because I’m normally leaning on status for EA-branded projects. For example, if Will MacAskill is starting a new project I will predict it’s going to be quite effective without knowing anything about the project, while I’d be skeptical about an unknown EA.
Another way of guarding against demoralization is comparing one’s absolute impact to that of people outside of EA. You could take your metric of impact, be it saving lives, improving human welfare, reducing animal suffering, or improving the long-term future, and compare the effectiveness of your donation to that of the average donation. For instance, with the median EA donation of $740, if you thought it were 100 times more effective than the average donation, that would correspond roughly to the typical donation of someone at the 99.9th income percentile in the US. And if you thought it were 10,000 times more effective, you could compete with billionaires!
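The back-of-the-envelope arithmetic behind this comparison, using the $740 median donation and the hypothetical effectiveness multipliers mentioned above:

```python
median_ea_donation = 740  # USD, the median EA donation cited above

# Hypothetical effectiveness multipliers from the comment above.
# equivalents[m] is the donation size an average-effectiveness donor
# would need to match a $740 donation that is m times more effective.
equivalents = {m: median_ea_donation * m for m in (100, 10_000)}
# 100x    -> $74,000 in average-donation terms
# 10,000x -> $7,400,000, i.e. large-philanthropy territory
```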