Generally I find it helps to separate cost from value (see The Value Of A Life) and also to point out that our decisions carry implicit value regardless of whether we articulate that value. By choosing to donate $5,000 to an art gallery rather than $5,000 to AMF to save a life (in expectation), I am implicitly valuing the former over the latter. Articulating what we value helps us understand whether these decisions are consistent with our beliefs.
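To put numbers on that implicit valuation, here is a minimal sketch, assuming the $5,000 cost-per-life figure above as a flat expected cost per life saved via AMF (an illustrative assumption; actual cost-effectiveness estimates vary):

```python
# Minimal sketch: the implicit valuation hiding in a donation choice.
# Assumes a flat ~$5,000 expected cost per life saved via AMF; this is
# an illustrative figure, not an official estimate.

donation = 5_000         # dollars to allocate
cost_per_life = 5_000    # assumed expected cost per life saved (AMF)

lives_saved_in_expectation = donation / cost_per_life

print(f"Giving ${donation:,} to AMF saves ~{lives_saved_in_expectation:.1f} "
      f"lives in expectation.")

# Choosing the art gallery instead implicitly values that gift at
# at least this many expected lives:
print(f"Choosing the gallery implicitly values it at >= "
      f"{lives_saved_in_expectation:.1f} expected lives.")
```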
I do believe that I probably value more things than just the expected utility of sentient beings. However, I budget a significant portion of my time and money to optimising that expected utility, while also carving out time and money for other things (my own hedonism, aesthetic preferences, special obligations to people in my life, and so on).
Also relevant is Julia’s post You Have More Than One Goal And That’s Fine.
I agree, and that is essentially the rationale I employ. I personally think I could put a value on every aspect of my life, thereby undermining the notion that implicit values can’t be made explicit.
However, I think the problem is that for some people your answer will be a non-starter. They might not want to assign their implicit values an explicit value (and your response would therefore shoo them away). So what I’m proposing is allowing them to keep their implicit values implicit, while showing them that you can still be an EA if you accept that other people have implicit values as well. Honestly, it’s barely a meta-ethical claim; it’s more an explication of how EA can jibe with various ethical frameworks.