Thanks for this comment; you raise a number of important points. I agree with everything you’ve written about QALYs and DALYs. We decided to frame this in terms of DALYs for simplicity and familiarity. In hindsight, this was probably a bit confusing, though, especially as we wanted to consider values of well-being (much) less than 0 and, in principle, greater than 1, whereas DALY weights are conventionally confined to the 0–1 range. So maybe a generic unit of hedonistic well-being would have been better. I think you’re right that this doesn’t matter a huge amount, because we’re uncertain over many orders of magnitude for other variables, such as the moral weight of chickens.
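To make that last claim a bit more concrete, here’s a minimal Monte Carlo sketch of why the unit choice washes out. The distributions and numbers are purely hypothetical placeholders I’ve made up for illustration, not our actual model: the idea is just that even if switching units rescaled every estimate by up to a factor of two, that ambiguity is small next to a moral-weight prior spanning orders of magnitude.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Hypothetical prior: moral weight of chickens relative to humans,
# a lognormal spanning several orders of magnitude (median 0.01).
moral_weight = rng.lognormal(mean=np.log(0.01), sigma=1.4, size=N)

# Hypothetical ambiguity from the choice of well-being unit: suppose
# moving from DALYs to a generic hedonistic unit rescales estimates
# by at most a factor of ~2 in either direction.
unit_rescale = rng.uniform(0.5, 2.0, size=N)

estimate = moral_weight * unit_rescale

# Compare how much spread (in log10 terms) each factor contributes.
for name, x in [("moral weight", moral_weight),
                ("unit rescaling", unit_rescale),
                ("combined", estimate)]:
    lo, hi = np.percentile(np.log10(x), [5, 95])
    print(f"{name:15s} 90% interval spans {hi - lo:.2f} orders of magnitude")
```

Under these made-up priors, the unit ambiguity contributes roughly half an order of magnitude of spread while the moral-weight prior contributes a couple of orders on its own, so the combined uncertainty is essentially unchanged by the unit choice.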
The trade-off problem is really tricky. I share your scepticism about people’s actual preferences tracking hedonistic value. We just took it for granted that there is a single, privileged way to make such trade-offs, but I agree that it’s far from obvious that this is true. I had in mind something like: “a given experience has well-being −1 if an idealised agent (or an agent with the experiencer’s idealised preferences) would be indifferent between non-existence and a life consisting of that experience together with an experience of well-being 1”. There are a number of problems with this conception, including the issue that there might not be a single idealised set of preferences for these trade-offs, as you suggest. I think we needed to make some kind of assumption like this to get this project off the ground, but I’d be really interested to hear thoughts or see future discussion on this topic!
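To spell out the intended anchoring condition a bit more formally (this is just one possible formalisation, and it assumes well-being is additive across experiences and that non-existence is normalised to value 0):

$$
w(e) = -1 \iff V_{\text{ideal}}\big(\{e, e^{+}\}\big) = V_{\text{ideal}}(\varnothing) = 0, \quad \text{where } w(e^{+}) = 1,
$$

so that under additivity, $V_{\text{ideal}}\big(\{e, e^{+}\}\big) = w(e) + w(e^{+})$, and indifference pins down $w(e) = -w(e^{+}) = -1$. Here $V_{\text{ideal}}$ is the value function of the idealised agent, and the worry raised above is precisely that there may be no single privileged $V_{\text{ideal}}$ to appeal to.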