Yup, I understand the general concept of preference falsification. My question is about the specific application. Could you give a concrete example of where this would be relevant, e.g. for malaria bednets or factory farming?
(I am somewhat sympathetic to this request, but really, I don’t think posts on the EA Forum should be that narrow in scope. Clearly modeling important society-wide dynamics is useful to the broader EA mission. To do the most good you need to model societies and how people coordinate and such. Those things seem much more useful to me than the marginal random fact about factory farming or malaria nets.)
I agree that not everything needs to supply random marginal facts about malaria. But at the same time I think concrete examples are useful to keep things grounded, and I think it’s reasonable to adopt a policy of ‘not relevant to EA until at least some evidence to the contrary is provided’. Apparently the OP does have some relevance in mind:
This matters because a lot of EA work involves studying revealed preferences in contexts with strong power dynamics (development economics, animal welfare, etc). If we miss these dynamics, we risk optimizing for the same coercive equilibria we’re trying to fix.
I feel like it would have been good to spend about half the post on this! Maybe I am just being dumb, but it is genuinely unclear to me what preference falsification the OP is worried about with animal welfare. Without this, the post reads as a long response to a question about sex that, as far as I can tell, no one on the forum asked.
The applicability to animal welfare is relatively complex, because it has to do with biases in how we project our agency onto animals when trying to sympathize with them. The applicability to global development is relatively straightforward, as frequently success is defined in terms that at least partially include acceptance of acculturation (schooling & white-collar careers) that causes people to endorse the global development efforts.
You haven’t addressed my question about how this post differs from other abstract theoretical work in EA. It’s a bit odd that you’re reiterating your original criticism without engaging with a direct challenge to its premises.
The push for immediate concrete examples or solutions can actually get in the way of properly understanding problems. When we demand actionable takeaways too early, we risk optimizing for superficial fixes rather than engaging with root causes—which is particularly relevant when discussing preference falsification itself. I think it’s best to separate arguments into independently evaluable modular units when feasible.
I’d still like to hear your thoughts on what distinguishes this kind of theoretical investigation from other abstract work that’s considered EA-relevant.
I’d like to better understand your criteria for relevance. Are you suggesting that EA relevance requires either explicit action items or direct factual support for current EA initiatives? If so, what makes this post different from abstract theoretical posts like this one on infinite ethics in terms of EA relevance?
I’d like to better understand your criteria for relevance.
There was some mental process that led you to think this was good content to share on the EA Forum. What that process was is (at least to me, and I suspect to other readers) very opaque, so I suggest you explicitly mention it.
A good example is this post. It also introduces a topic with no explicit action items and doesn’t provide ‘direct factual support for current EA initiatives’. But it is pretty clear why it might be relevant to EA work, and the author explicitly included a section gesturing at the reasons to make it clear.
Are you suggesting that EA relevance requires either explicit action items or direct factual support for current EA initiatives?
No, I am not.