The applicability to animal welfare is relatively complex, because it has to do with biases in how we project our own agency onto animals when trying to sympathize with them. The applicability to global development is relatively straightforward: success is frequently defined in terms that at least partially include acceptance of acculturation (schooling & white-collar careers), and that acculturation is itself what leads people to endorse the global development efforts.
You haven’t addressed my question about how this post differs from other abstract theoretical work in EA. It’s a bit odd that you’re reiterating your original criticism without engaging with a direct challenge to its premises.
The push for immediate concrete examples or solutions can actually get in the way of properly understanding problems. When we demand actionable takeaways too early, we risk optimizing for superficial fixes rather than engaging with root causes—which is particularly relevant when discussing preference falsification itself. I think it’s best to separate arguments into independently evaluable modular units when feasible.
I’d still like to hear your thoughts on what distinguishes this kind of theoretical investigation from other abstract work that’s considered EA-relevant.