My interpretation of Siebe's point is that we shouldn't try to scale our praise of people with their expected impact. For example, someone who saves one life probably deserves more than one-billionth of the praise we give to Norman Borlaug.
Reasons I agree with this point, if that was what Siebe was saying:
The marginal value of more recognition eventually drops off, to the point where it's no longer useful as an incentive (or even desired by the person being recognized).
It's easy to forget how easy it is to be wrong about predicted impact, or to anchor on high numbers when we're looking at someone impressive. (Maybe Norman Borlaug actually saved only a hundred million lives; it's very hard to tell what would have happened without him.) Using something like "square root of expected value" lets us hedge against our uncertainty.
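To make the square-root idea concrete, here's a minimal sketch (all numbers are illustrative, and the `recognition` function is just one way to operationalize the suggestion, not anything from the original comment):

```python
import math

def recognition(expected_lives_saved):
    """Scale recognition with the square root of expected impact,
    rather than linearly, to hedge against uncertain estimates."""
    return math.sqrt(expected_lives_saved)

# Linear scaling would make Borlaug's praise a billion times larger
# than that of someone who saves one life; square-root scaling
# compresses that ratio to about 31,600 -- still huge, far less extreme.
ratio = recognition(1_000_000_000) / recognition(1)
print(round(ratio))
```

The point of the compression is that if the billion-lives estimate is off by a factor of ten in either direction, the implied recognition only changes by a factor of about three.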
Something along those lines. Thanks for interpreting! :)
What I was getting at was mostly that praise/recognition should be a smooth function, such that things not branded as EA still get recognition if they're only 1/10th as effective, instead of the current situation (as I perceive it) where something that is not maximally effective isn't recognized at all. I notice in myself that I find it harder to assess and recognize the impact of non-EA-branded projects.
I expect this is partly because I don't have access to, or don't understand, the reasoning, so I can't assess the expected value; but it's also partly because I normally lean on status for EA-branded projects. For example, if Will MacAskill is starting a new project, I will predict it's going to be quite effective without knowing anything about the project, while I'd be skeptical about an unknown EA.
Another way of guarding against being demoralized is comparing one's absolute impact to that of people outside of EA. For instance, you could take your metric of impact, be it saving lives, improving human welfare, reducing animal suffering, or improving the long-term future, and compare the effectiveness of your donation to the average donation. For instance, with the median EA donation of $740, if you thought it were 100 times more effective than the average donation, this would correspond roughly to the donation typical of the 99.9th income percentile in the US. And if you thought it were 10,000 times more effective, you could compete with billionaires!
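The arithmetic behind that comparison can be sketched as follows (the $740 figure is from the comment above; the effectiveness multipliers are the hypothetical values it considers, not empirical claims):

```python
# Effectiveness-adjusted value of the median EA donation, under
# assumed multipliers relative to the average donation.
median_ea_donation = 740  # USD, median EA donation cited above

for multiplier in (100, 10_000):
    equivalent = median_ea_donation * multiplier
    print(f"{multiplier:>6}x as effective ~ a ${equivalent:,} typical donation")
```

So under the 10,000x assumption, a $740 donation does as much good as a $7.4 million donation at average effectiveness, which is the sense in which one "competes with billionaires."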