Thanks for the post.
When making clear that QALYs aren’t the only thing EAs care about, it might be worth adding that even welfare maximisation doesn’t have to be the only thing EAs care about; this might vary based on one’s conception of EA, but given that the movement at least currently accommodates non-utilitarians (and I hope it continues to do so!), we don’t want to fall into a WALY-maximisation trap any more than a QALY-maximisation trap.
That is to say: this post tells us, “look, specifically in the realm of health, there do seem to be ways of measuring things, but what we actually care about measuring is welfare”. I’d suggest we say instead: “look, specifically in the realm of health, there do seem to be ways of measuring things, but we might actually want to measure any given value we care about”.
Agree—I mention that in brackets, and think it’s also good to clarify if you have the opportunity.