Sure, I think that makes sense if we see EA as just another preference like any other. If we were 100% certain there was no free will, though, I think it would greatly reduce the moral force of the argument supporting EA (and of any decision-guiding framework), as I couldn’t reasonably tell someone, or myself, ‘you ought to do X over and above Y’.