If skepticism about free will renders the EA endeavor void, then wouldn’t it also render any action-guiding principles void (including principles about what’s best to do out of self-interest)? In which case, it seems odd to single out its consequences for EA.
You sometimes see some (implicit) sliding from “we did this good thing, but there’s a sense in which we can’t take credit, because it was determined before we chose to do it” to “we did this good thing, but there’s a sense in which we can’t take credit, because it would have happened whether or not we chose to do it”, where the latter can be untrue even if the former is always true. The former doesn’t imply anything about what you should have done instead, while the latter does, but has nothing to do with skepticism about free will. So even if determinism undermines certain kinds of “you ought to x” claims, it doesn’t imply “you ought to not bother doing x” — it does not justify resignation. There is a parallel (though maybe more problematic) discussion about what to do about the possibility of nihilism.
Anyway, even skeptics about free will can agree that ex post it was good that the good thing happened (compared to it not happening), and they can agree that certain choices were instrumental in it happening (if the choices hadn’t been made, it wouldn’t have happened). Looking forward, the skeptic could also understand “you ought to x” claims as saying “the world where you do x will be better than the world where you don’t, and I don’t have enough information to know which world we’re in”. They also don’t need to deny that people are and will continue to be sensitive to “ought” claims, in the sense that explaining to people why they ought to do something can make them more likely to do it compared to the world where you don’t explain why. Basically, counterfactual talk can still make sense for determinists. And all this seems like more than enough for anything worth caring about — I don’t think any part of EA requires our choices to be undetermined or freely made in some especially deep way.
Some things you might be interested in reading —
Harry Frankfurt: Freedom of the Will and the Concept of a Person
Harry Frankfurt: Alternate Possibilities and Moral Responsibility
P. F. Strawson: Freedom and Resentment
Kathleen Vohs & Jonathan Schooler: The Value of Believing in Free Will: Encouraging a Belief in Determinism Increases Cheating
I think maybe this free will stuff does matter in a more practical way when it comes to prison reform and punishment, since (plausibly) support for ‘retributive’ punishment over rehabilitation comes from attitudes about free will and responsibility that are either incoherent or wrong in an influenceable way.
Thanks finm, I agree: EA is far from uniquely vulnerable to determinism; as you say, all action-guiding principles would be affected. I was just contextualising for the forum.
Yes, I think that’s a useful distinction. Harris labels these ‘determinism’ and ‘fatalism’ respectively, and so still believes our decisions matter in the sense that they will impact the value of future world-states.
That could work as a way to reformulate the meaning of ‘ought’ statements, though I still feel something important is lost from ethics if determinism is true.
Will have a look at the resources :)