Your description of practical critiques being difficult to steelman with only anecdata available feels like the classic challenge of balancing type I and type II error when reality is underpowered.
In the context of a contest encouraging effective altruism critiques, I think we may want a much higher tolerance than usual for type I error in order to reduce type II error (I am thinking of the null hypothesis as “critique is false”, so a type I error would be accepting a false critique and a type II error would be rejecting a true critique).
Obviously, there needs to be some chance that the critique holds. However, it seems very valuable to encourage critiques that would be a big deal if true, even if we’re very uncertain about the assumptions, especially if the assumptions are clear and possible to test with some amount of further investment (e.g. by adding a question to next year’s EA survey, or getting some local groups to ask their new attendees to fill out an anonymous survey on their impressions of the group).
This makes me think that maybe a good format for EA critiques is a list of assumptions (maybe even with the authors’ credences that they all hold, and their reasoning), followed by the outlined critique if those assumptions are true. If criticisms clearly lay out their assumptions, then even if we guess there is, say, a 70% chance that the assumptions don’t hold, in the 30% of possible worlds where they do hold up (assuming our guess was well-calibrated :P), having the hypothetical implications written up still seems very valuable (to help us work out whether it’s worth investigating these assumptions further, to get us to pay more attention to evidence for and against the hypothesis that we live in that 30% world, and to get us to think about whether there are low-cost actions we can take just in case we live in that 30% world).
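To make the expected-value logic concrete, here’s a minimal sketch. The 30% credence is from above; all the value and cost figures are purely illustrative placeholders, not claims about any real critique:

```python
# Illustrative expected-value check: is it worth writing up a critique
# whose assumptions only have a 30% chance of holding?
# All numbers are made up for the sake of the sketch.

p_assumptions_hold = 0.30   # our (hopefully well-calibrated) credence
value_if_true = 100.0       # value of having the write-up in the world where it holds
value_if_false = 5.0        # residual value: clarified assumptions, testable questions
cost_of_writeup = 10.0      # effort cost of producing the critique

expected_value = (p_assumptions_hold * value_if_true
                  + (1 - p_assumptions_hold) * value_if_false
                  - cost_of_writeup)

print(expected_value)  # 23.5 -> positive under these made-up numbers
```

The point is just that a write-up can be net-positive even when the assumptions probably don’t hold, as long as the payoff in the worlds where they do hold is large enough.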
Hmmm I think it’s actually really hard to critique EA in a way that EAs will find convincing. I wrote about this below. Curious for feedback: https://twitter.com/tyleralterman/status/1511364183840989194?s=21&t=n_isE2vL3UIJsassqyLs8w
“Not being easy to criticise even if the criticism is valid” seems like an excellent critique of effective altruism.