Some people think we’re entirely clueless: that we haven’t the faintest idea which actions will benefit the far future. I disagree with this position, for reasons Richard Y Chappell has explained very persuasively. It would be awfully convenient if, after learning that the far future holds nearly all the expected value in the world, it turned out that this had no significant normative implications.
What do you think about the argument for cluelessness from rejecting precise expected values in the first place (which I partly argue for here)?