If free will doesn’t exist, does that undermine or render void the EA endeavour?
Can you say more about why free will not existing is relevant to morality?
My personal take is that free will seems like a pretty meaningless and confused concept, and probably doesn’t exist (whatever that means). But I still want to do what I can to make the world a better place anyway, in the same way that I clearly want and value things in my normal life, regardless of whether I’m doing this with free will.
Sure, I think that makes sense if we see EA as just another preference like any other. But if we were 100% certain there was no free will, I think it would greatly reduce the moral force of the argument supporting EA (and any decision-guiding framework), as I couldn’t reasonably tell someone, or myself, ‘you ought to do X over and above Y’.
As a strong free will sceptic I agree that you can never reasonably tell someone “you ought to do X over and above Y”.
However, it makes complete sense to me in a purely deterministic world to make one small addition to the phrase:
“you ought to do X over and above Y in order to achieve Z”. The ‘ought’ has no meaning without the Z, with the Z representing the ideal world you are deterministically programmed to want to live in.
Thanks for the comment (and welcome to the Forum! :) ). Yeah, using conditional oughts seems like a pretty reasonable approach to me, though of course it has some convenience cost when the Z is very widely shared (‘you ought to fix your brakes rather than drive without brakes in order to not crash’), in which case the Z can perhaps be left implied.