Nice post!
Why are popularity-contest dynamics harmful, precisely? I suppose one argument is: If you are looking for the best new argument against psychedelics, popularity-contest dynamics are likely to get you the argument that resonates with the most people, or perhaps the argument that the most people can understand, or the argument that the most people had in their head already. These could still be useful to learn about, though.
For judging, you could always bring in a neutral third party. I’m also curious about a prize format like “$X to anyone who’s able to change my mind substantially about Y”. (This might be the closest thing I’ve seen to that so far.) Or a prize format which attempts to measure & reward novelty/variety among the responses somehow; a toy sketch of what that could look like is below.
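Here’s a minimal illustration of one way such a format could work (my own toy construction, not anything from the post): split the prize pool in proportion to how dissimilar each response is from all the others, using word-set Jaccard overlap as a crude stand-in for a real similarity measure. The entries and the `novelty_payouts` helper are hypothetical.

```python
def jaccard(a: set, b: set) -> float:
    """Word-set overlap: 0 = disjoint, 1 = identical."""
    return len(a & b) / len(a | b)

def novelty_payouts(responses: list[str], pool: float) -> list[float]:
    """Split `pool` across responses in proportion to each one's novelty,
    where novelty = 1 - (max similarity to any other response)."""
    word_sets = [set(r.lower().split()) for r in responses]
    scores = []
    for i, ws in enumerate(word_sets):
        sims = [jaccard(ws, other) for j, other in enumerate(word_sets) if j != i]
        scores.append(1.0 - max(sims, default=0.0))
    total = sum(scores) or 1.0  # avoid dividing by zero if all entries are identical
    return [pool * s / total for s in scores]

# Two near-duplicate entries split a small share; the distinct one earns more.
entries = [
    "psychedelics carry mental health risks for some users",
    "psychedelics carry mental health risks for many users",
    "the evidence base rests on small underpowered trials",
]
for entry, payout in zip(entries, novelty_payouts(entries, pool=1000.0)):
    print(f"${payout:7.2f}  {entry}")
```

Obviously surface word overlap is a poor proxy for conceptual novelty, but it shows the shape of the mechanism: payouts that depend on the whole pool of responses rather than on each response in isolation.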
You mentioned status quo bias. It’s interesting that all 3 of the prizes you link at the top are cases where people presented a new EA initiative and paid the community for the best available critiques. One idea for evening things out is to offer prizes for the best arguments against established EA donation targets! I do think you’re right that more outsider-y causes are asked to meet a higher standard of support.
For example, this recent post on EA as an ideology did very little to critique global poverty, but there’s a provocative argument that our focus on global poverty is one of the most ideological aspects of EA: it is easily the most popular EA cause area, but my impression is that less has been written to justify a focus on global poverty than to justify other cause areas; it seems to have been “grandfathered in” due to the drowning child argument.
Similarly, we could turn the tables on the EA Hotel discussion by asking mainstream EA orgs to justify why they pay their employees such high salaries to live in high-cost-of-living areas. I’ve also heard tales through the grapevine about the perverse incentives created by the need to fundraise for projects in EA, and my perception is that this is a big issue in the cause area I’m most excited about (AI safety). (Here is a recent LW thread on this topic.)
This is a great idea!
Whoa, I didn’t know about this one. Thanks for the link!
A similar sort of thing is a big part of the reason why Eliezer had difficulty advocating for AI safety, back in the 2000s.
Oh, I thought you were talking about popularity-contest dynamics for arguments, not causes.
Sounds like you are positing a Matthew Effect, where causes that many people are already working on will tend to have greater awareness (due to greater visibility) and also greater credibility (“so many people are working on this cause, they must be on to something!”). Newcomers to EA will probably be especially tempted by causes that many people are already working on, since they won’t feel they are in a position to evaluate causes for themselves.
If true, an unfortunate side effect would be that neglected causes tend to remain neglected.
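To make the dynamic concrete, here’s a tiny Pólya-urn-style simulation (purely my own illustration; the cause names and headcounts are made up): each newcomer joins a cause with probability proportional to its current headcount, and the initial gap persists even though neither cause is intrinsically better.

```python
import random

random.seed(0)
workers = {"popular cause": 50, "neglected cause": 5}

for _ in range(1000):  # each newcomer picks a cause proportionally to its headcount
    choice = random.choices(list(workers), weights=list(workers.values()))[0]
    workers[choice] += 1

print(workers)  # roughly preserves the initial 10:1 ratio, e.g. ~955 vs ~100
```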
I think the way things work in practice nowadays is that a few organizations in the community (OpenPhil, 80K) have a lot of credibility and do their own in-depth evaluations of causes, and EA resources end up getting directed based on those evaluations. I’m not sure this is such a bad setup overall.
Yeah it doesn’t seem terrible. It probably misses a lot of upside, though.