These are good questions, and I think the answer is generally yes: we should be disposed to treat the future’s ethics as superior to our own, although we shouldn’t be unquestioning about this.
The place to start is simply to note the obvious fact that moral standards shift all the time, often in quite radical ways. So at the very least we ought to assume a stance of skepticism toward any particular moral posture, as we have reason to believe that ethics in general are highly contingent, culture-bound, etc.
Then the question becomes whether we have reasons to favor some period’s moral stances over any others. There are a variety of reasons we might do so:
Knowledge has been increasing monotonically, and in recent years extremely rapidly. Much of this knowledge is scientific, technological, or involves other kinds of expertise, and such knowledge does have a moral valence. E.g., we do not believe in witches anymore.
Some of our increasing knowledge is historical and philosophical. The Catholic Church did a lot of things in the Middle Ages that seem very bad to me but seemed morally justified to the Church at the time. But I also have access to a lot of historical information about the Middle Ages, and I can situate the Church’s actions in a broader story about politics, empire, religious conflict, etc., that undercuts its moral claims. Other things being equal, we are probably wise to privilege later time periods over earlier ones, because later time periods saw how things turned out. Nazism seemed like a moral imperative to Nazis, but here in 2022, I know how WWII played out. (Spoiler alert: not well!)
The moral changes that have occurred over time are not random, and we can apply meta-ethics to them to try to understand how things have changed. We used to condone slavery and now we abhor it. Is that just happenstance, such that in some alternate history we used to abhor slavery (perhaps for religious reasons) and now embrace it (perhaps because of the logic of capitalism)? Probably not, because across the board the ethical trend has been an extension of rights, franchise, and dignity to widening circles of humans. So we can ask whether we think that is a good ethical trend and draw conclusions about the relative merits of different moral frameworks.
Wealth has also been increasing more or less monotonically, and insofar as moral behavior might be considered a luxury good, we should suppose that it may be more abundant these days than in the past. (This claim deserves a ton of scrutiny. I think it probably is true in some spheres, e.g., gender equality, and maybe less so in others.)
I want to stress that I don’t think these arguments are absolute proof of anything; they are simply reasons we should be disposed to privilege the broad moral leanings of the future over those of the past. Certainly, over short time spans, many moral shifts are highly contingent and culture-bound. I also think that broad trends might mask a lot of smaller trends that bounce around much more randomly. And it is absolutely possible that some long-term trends will be morally degrading. For example, I am not at all sure that long-term technological trends are well-aligned with human flourishing.
It is very easy to imagine that future generations will hold moral positions that we find repugnant. Imagine, for example, that in the far future pregnancy is obsolete. The vast majority of human babies are gestated artificially, which people of the future find safer and more convenient than biological pregnancy. Imagine as a consequence of this that viable fetuses become much more abundant, and people of the future think nothing of raising multiple babies until they are, say, three months old, selecting the “best” one based on its personality, sleeping habits, etc., and then painlessly euthanizing the others. Is this a plausible future scenario, or do meta-ethical trends suggest we shouldn’t be concerned about it? If we look into our crystal ball and discover that this is in fact what our descendants get up to, should we conclude that in the future technological progress will degrade the value of human life in a way that is morally perverse? Or should we conclude instead that technological progress will undermine some of our present-day moral beliefs that aren’t as well-grounded as we think they are? I don’t have a definitive answer, but I would at least suggest that we should strongly consider the latter.
The negative reactions to this post are disheartening. I have a certain fondness for the parodic levels of overthinking that characterize the EA community, but here you see the downsides of that overthinking concretely.
Of course it is meaningful that Eliezer Yudkowsky has made a bunch of terrible predictions in the past that closely echo predictions he continues to make, in slightly different form, today. Of course it is relevant that he has neither owned up to those earlier terrible predictions nor explained how he has learned from those mistakes. Of course we should be more skeptical of similar claims he makes in the future. Of course we should pay more attention to broader consensus or aggregate predictions in the field than to outlier predictions.
This is sensible advice in any complex domain, and saying that we should “evaluate every argument in isolation on its merits” is a type of special pleading or sophistry. Sometimes (often!) the obvious conclusions are the correct ones: even extraordinarily clever people are often wrong; extreme claims that other knowledgeable experts disagree with are often wrong; and people who make extreme claims that prove to be wrong should be strongly discounted when they make further extreme claims.
None of this is to suggest in any way that Yudkowsky should be ignored, or even that he is necessarily wrong. But if you yourself are not an expert in AI (as most of us aren’t), his past bad predictions are highly relevant when assessing his current ones.