I agree that not everything needs to supply random marginal facts about malaria. But at the same time I think concrete examples are useful to keep things grounded, and I think it’s reasonable to adopt a policy of ‘not relevant to EA until at least some evidence to the contrary is provided’. Apparently the OP does have some relevance in mind:
This matters because a lot of EA work involves studying revealed preferences in contexts with strong power dynamics (development economics, animal welfare, etc). If we miss these dynamics, we risk optimizing for the same coercive equilibria we’re trying to fix.
I feel like it would have been good to spend like half the post on this! Maybe I am just being dumb, but it is genuinely unclear to me what preference falsification the OP is worried about with animal welfare. Without this, the post reads as a long response to a question about sex that, as far as I can tell, no one on the forum asked.
The applicability to animal welfare is relatively complex, because it has to do with biases in how we project our agency onto animals when trying to sympathize with them. The applicability to global development is relatively straightforward, as frequently success is defined in terms that at least partially include acceptance of acculturation (schooling & white-collar careers) that causes people to endorse the global development efforts.
You haven’t addressed my question about how this post differs from other abstract theoretical work in EA. It’s a bit odd that you’re reiterating your original criticism without engaging with a direct challenge to its premises.
The push for immediate concrete examples or solutions can actually get in the way of properly understanding problems. When we demand actionable takeaways too early, we risk optimizing for superficial fixes rather than engaging with root causes, which is particularly relevant when the topic under discussion is preference falsification itself. I think it's best to separate arguments into independently evaluable modular units when feasible.
I’d still like to hear your thoughts on what distinguishes this kind of theoretical investigation from other abstract work that’s considered EA-relevant.