I didn’t read the goal here as literally being to score points with future people, though I agree the post is phrased in a way that implies future ethical views will be superior.
Rather, I think the aim is to construct a framework that can be applied consistently across time—avoiding the pitfalls of common-sense morality both past and future.
In other words, this could alternatively be framed as ‘backtesting ethics’ or something similar, but ‘future-proofing’ speaks to (a) concern about repeating past mistakes and (b) the prospect of personal regret in the future.
I think I agree with Tyler. Also see this follow-up piece—“future-proof” is supposed to mean “would still look good if we made progress, whatever that is.” This is largely supposed to be a somewhat moral-realism-agnostic operationalization of what it means for object-level arguments to be right.