This is a good point that I don't think I considered enough. This post touches on it somewhat.
I do think the signal for which actions are best has to come from somewhere. You seem to be suggesting it can't come from the decision-maker at all, since people make decisions before thinking about them. That's possible, but I still think there's at least some component of people thinking clearly about their decisions, even if what they're actually doing is trying to emulate what those around them would think.
We do want to generate genuine signal about what is best, and maybe we can get some of it by thinking seriously about things, even if there is inevitably a component of motivated reasoning no matter what.
A leaderboard on the forum, ranking users by (some EA organization's estimate of) their personal impact, could give rise to a whole bunch of QALYs.
If this estimate is based on social evaluations, won’t the people making those evaluations have the same problem with motivated reasoning? It’s not clear this is a better source of signal for which actions are best for individuals.
If signal can never truly come from subjective evaluation, then moving to social evaluation wouldn't solve the problem. Concrete, measurable metrics would be one alternative, but finding them seems way harder in some fields than in others.
(Intersubjective evaluation, i.e. combining multiple people's subjective evaluations, could plausibly be better than any one person's subjective evaluation, especially better than that person's evaluation of themselves, assuming the 'errors' are somewhat uncorrelated.)
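A rough sketch of why the "somewhat uncorrelated" part matters (this is just the standard variance-of-an-average calculation, with n, σ, and ρ introduced here for illustration, not anything from the original comment): if n evaluators each make an error with variance σ² and pairwise correlation ρ, averaging their evaluations gives error variance

$$\operatorname{Var}\!\left(\frac{1}{n}\sum_{i=1}^{n} e_i\right) = \frac{\sigma^2}{n} + \frac{n-1}{n}\,\rho\,\sigma^2 \;\longrightarrow\; \rho\,\sigma^2 \quad \text{as } n \to \infty.$$

With ρ = 0 the error shrinks like 1/n, but with fully correlated errors adding evaluators buys nothing; motivated reasoning shared across a whole community is exactly the kind of correlated error that caps the benefit.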