I’m puzzled by the aspiration that our ethical system should be ‘future-proofed’. Why, exactly, should we care about what future people will think of us? How far into the future should we care about, anyway? Conversely, shouldn’t we also care how past people would have judged us? Should we care whether current people judge us? How are we to weigh these considerations? If we knew that the world was about to be taken over by some immortal totalitarian regime, we could future-proof our views by just adopting that regime’s beliefs now. Does knowing that this would happen give us any reason to change our views?
Presumably, the underlying thought is that future people will have superior ethical views—that’s what matters, not the fact in itself that future people hold them (cf. Plato’s Euthyphro dilemma: do the gods love things because they are good, or are they good because the gods love them?). And the reason we believe that is that we think there has been ‘moral progress’—that is, that we have superior views to our forebears. But to say our views are superior because, and only because, they are (say) more utilitarian, sentientist, etc. is just to assert that one thinks those beliefs are true; it’s not an argument for those views. Someone who held other views might think we are experiencing moral decay.
Given all this, I prefer the task of engaging with the object-level ethical arguments, doing our best to work out what the right principles are, then taking action. It feels disempowering and ‘spooky’ to say “future people are going to be much better at ethics for reasons we would not or cannot understand; so let’s try to figure out what they would do and do that, even if it makes no sense to us”.
I didn’t read the goal here as literally being to score points with future people, though I agree that the post is phrased in a way that implies future ethical views will be superior.
Rather, I think the aim is to construct a framework that can be applied consistently across time—avoiding the pitfalls of common-sense morality both past and future.
In other words, this could alternatively be framed as ‘backtesting ethics’ or something similar, but ‘future-proofing’ speaks to (a) concern about repeating past mistakes and (b) personal regret in the future.
I think I agree with Tyler. Also see this follow-up piece—“future-proof” is supposed to mean “would still look good if we made moral progress, whatever that turns out to be.” This is largely intended as a somewhat moral-realism-agnostic operationalization of what it means for object-level arguments to be right.