I think newer generations will tend to grow up with better views than older ones (although, at any given moment, older generations could have better views than younger ones, because they’re more informed), since younger generations can inspect and question the views of their elders, alternative views, and the reasons for and against them with less bias and attachment.
This view assumes that moral progress is a real thing, rather than just an illusion. I could personally understand this view if the younger generations shared the same terminal values and merely refined instrumental values or became better at discovering logical inconsistencies or something. However, it also seems likely that what we call moral progress can just as well be described as moral drift.
Personally, I’m a moral anti-realist. Morals are more like preferences and desires than science. Each generation has preferences, and the next generation has slightly different preferences. When you put it that way, the idea of fundamentally better preferences doesn’t quite make sense to me.
More concretely, we could imagine several ways that future generations disagree with us (and I’m assuming a suffering reduction perspective here, as I have identified you as among that crowd):
Future generations could see more value in deep ecology and preserving nature.
They could see more value in making nature simulations.
They could see less value in ensuring that robots have legally protected rights, since that’s a staple of early 21st century fiction and future generations who grew up with robot servants might not really see it as valuable.
I’m not trying to say that these are particularly likely things, but it would seem strange to put full faith in a consistent direction of moral progress when nearly every generation before us has experienced the opposite, i.e., take any generation from prior centuries and they would hate what we value these days. The same will probably be true for you too.
I’m a moral anti-realist, too. You can still be a moral anti-realist and believe that your own ethical views have improved in some sense, although I suppose you’ll never believe that they’re worse now than before, since you wouldn’t hold them if that were the case. Some think of it as what you would endorse if you were less biased, had more information and reflected more. I think my views are better now because they’re more informed, but it’s possible that I’ve been so biased in dealing with new information that my views are in fact worse now than before.
In the same way, I think the views of future generations can end up better than my views will ever be.
More concretely, we could imagine several ways that future generations disagree with us (and I’m assuming a suffering reduction perspective here, as I have identified you as among that crowd):
So I don’t expect such views to be very common over the very long-term (unless there are more obstacles to having different views in the future), because I can’t imagine there being good (non-arbitrary) reasons for those views (except the 2nd, and also the 3rd if future robots turn out to not be conscious) and there are good reasons against them. However, this could, in principle, turn out to be wrong, and an idealized version of myself might have to endorse these views or at least give them more weight.
I think where idealized versions of myself and idealized versions of future generations will disagree is due to different weights given to opposing reasons, since there is no objective way to weight them. My own weights may be “biases” determined by my earlier experiences with ethics, other life experiences, genetic predisposition, etc., and maybe some weights could be more objective than others based on how they were produced, but without this history, no weights can be more objective than others.
Finally, just in practice, I think my views are more aligned with those of younger generations and generations to come, so views more similar to my own will be relatively more prominent if we don’t cure aging (soon), which is a reason against curing aging (soon), at least for me.
I’m a moral anti-realist, too. You can still be a moral anti-realist and believe that your own ethical views have improved in some sense
Sure. There are a number of versions of moral anti-realism, and under some of them it makes sense to think that moral progress is a real thing. My own version of ethics says that morality doesn’t run that deep and that personal preferences are pretty arbitrary (though I do think some reflection is worthwhile).
In the same way, I think the views of future generations can end up better than my views will ever be.
Again, that makes sense. I personally don’t really share the same optimism as you.
So I don’t expect such views to be very common over the very long-term
One of the frameworks I propose in my essay that I’m writing is the perspective of value fragility. Across many independent axes, there are many more ways that your values can get worse than better. This is clear in the case of giving an artificial intelligence some utility function, but it could also (more weakly) be the case in deferring to future generations.
You point to idealized values. My hypothesis is that allowing everyone who currently lives to die and putting future generations in control is not a reliable idealization process. There are many ways that I am OK with deferring my values to someone else, but I don’t really understand how generational death is one of those.
By contrast, there are a multitude of human biases that lead people to have rosier views about future generations than seems (to me) warranted by the evidence:
Status quo bias. People dying and leaving stuff to the next generations has been the natural process for millions of years. Why should we stop it now?
The relative values fallacy. This goes something like, “We can see that the historical trend is for values to get more normal over time. Each generation has gotten more like us. Therefore future generations will be even more like us, and they’ll care about all the things I care about.”
Failure to appreciate diversity of future outcomes. Robin Hanson talks about how people use a far-view when talking about the future, which means that they ignore small details and tend to focus on one really broad abstract element that they expect to show up. In practice this means that people will assume that because future generations will likely share our values across one axis (in your case, care for farm animals) that they will also share our values across all axes.
Belief in the moral arc of the universe. Moral arcs play a large role in human psychology. Religions display them prominently in the idea of apocalypses where evil is defeated in the end. Philosophers have believed in a moral arc, too, and since many of the supposed moral arcs contradict each other, it’s probably not a real thing. This is related to the just-world fallacy: you imagine how awful it would be if future generations actually turned out to be so horrible, so you just sort of pretend that bad outcomes aren’t possible.
I personally think that the moral circle expansion hypothesis is highly important as a counterargument, and I want more people to study this. I am very worried that people assume that moral progress will just happen automatically, almost like a spiritual force, because well… the biases I gave above.
Finally, just in practice, I think my views are more aligned with those of younger generations and generations to come
This makes sense if you are referring to the current generation, but I don’t see how you can possibly be aligned with future generations that don’t exist yet?
One of the frameworks I propose in my essay that I’m writing is the perspective of value fragility. Across any individual axis, there are many more ways that your values can get worse than better.
There are more ways, yes, but I think they’re individually much less likely than the ways in which they can get better, assuming they’re somewhat guided by reflection and reason. This might still hold once you aggregate all the ways they can get worse and, separately, all the ways they can get better, but I’m not sure.
You point to idealized values. My hypothesis is that allowing everyone who currently lives to die and putting future generations in control is not a reliable idealization process.
This makes sense if you are referring to the current generation, but I don’t see how you can possibly be aligned with future generations that don’t exist yet?
I expect future generations, compared to people alive today, to be less religious, less speciesist, less prejudiced generally, more impartial, more consequentialist and more welfarist, because of my take on the relative persuasiveness of these views (and the removal of psychological obstacles to having these views), which I think partially explains the trends. No guarantee, of course, and there might be alternatives to these views that don’t exist today but are even more persuasive, but maybe I should be persuaded by them, too.
I don’t expect them to be more suffering-focused (beyond what’s implied by the expectations above), though. Actually, if current EA views become very influential on future views, I might expect those in the future to be less suffering-focused and to cause s-risks, which is concerning to me. I think the asymmetry is relatively more common among people today than it is among EAs, specifically.
There are more ways, yes, but I think they’re individually much less likely than the ways in which they can get better, assuming they’re somewhat guided by reflection and reason.
Again, I seem to have different views about the extent to which moral views are driven by reflection and reason. For example, is the recent trend towards Trumpian populism driven by reflection and reason? (If you think this is not a new trend, then I ask you to point to previous politicians who share the values of the current administration).
I expect future generations, compared to people alive today, to be less religious
I agree with that.
less speciesist
This is also likely. However, I’m very worried about the idea that caring about farm animals doesn’t imply an anti-speciesist mindset. Most vegans aren’t concerned about wild animal suffering, and the primary justification that most vegans give for their veganism is from an exploitation framework (or environmentalist one) rather than a harm-reduction framework. This might not robustly transfer to future sentience.
less prejudiced generally, more impartial
This isn’t clear to me. From this BBC article, “Psychologists used to believe that greater prejudice among older adults was due to the fact that older people grew up in less egalitarian times. In contrast to this view, we have gathered evidence that normal changes [i.e., aging] to the brain in late adulthood can lead to greater prejudice among older adults.” Furthermore, “prejudice” is pretty vague, and I think there are many ways that young people are prejudiced without even realizing it (though of course this applies to old people too).
more consequentialist and more welfarist
I don’t really see why we should expect this personally. Could you point to some trends that show that humans have become more consequentialist over time? I tend to think that Hansonian moral drives are really hard to overcome.
because of my take on the relative persuasiveness of these views (and the removal of psychological obstacles to having these views)
The second reason is a good one (I agree that when people stop eating meat they’ll care more about animals). The relative persuasiveness thing seems weak to me because I have a ton of moral views that I think are persuasive and yet don’t seem to be adopted by the general population. Why would we expect this to change?
I don’t expect them to be more suffering-focused (beyond what’s implied by the expectations above), though. Actually, if current EA views become very influential on future views, I might expect those in the future to be less suffering-focused and to cause s-risks, which is concerning to me.
It sounds like you are not as optimistic as I thought you were. Out of all the arguments you gave, I think the argument from moral circle expansion is the most convincing. I’m less sold on the idea that moral progress is driven by reason and reflection.
I also have a strong prior against positive moral progress relative to any individual parochial moral view, given what looks like positive historical evidence against it (the communists of the early 20th century probably thought that everyone would adopt their perspective by now; the same goes for Hitler, alcohol prohibitionists, and many other movements).
Overall, I think there are no easy answers here and I could easily be wrong.
Again, I seem to have different views about the extent to which moral views are driven by reflection and reason. For example, is the recent trend towards Trumpian populism driven by reflection and reason? (If you think this is not a new trend, then I ask you to point to previous politicians who share the values of the current administration).
(...)
The relative persuasiveness thing seems weak to me because I have a ton of moral views that I think are persuasive and yet don’t seem to be adopted by the general population. Why would we expect this to change?
I don’t really have a firm idea of the extent to which reflection and reason drive changes in, or the formation of, beliefs; I just think they have some effect. They might have disproportionate effects in a motivated minority of people who become very influential, but not necessarily primarily through advocacy. I think that’s a good description of EA, actually. In particular, if EAs increase the development and adoption of plant-based and cultured animal products, people will become less speciesist because we’re removing psychological barriers for them, and EAs are driven by reflection and reason, so these changes are in part indirectly driven by reflection and reason. Public intellectuals and experts in government can have influence, too.
Could the relatively pro-trade and pro-migration views of economists, based in part on reflection and reason, have led to more trade and migration, and caused us to be less xenophobic?
Minimally, I’ll claim that, all else equal, if the reasons for one position are better than the reasons for another (and especially if there are good reasons for the first and none of the other), then the first position should gain more support in expectation.
I don’t think short-term trends can usually be explained by reflection and reason, and I don’t think Trumpian populism is caused by reflection and reason, but I think the general trend throughout history is away from such tribalistic views, and I think the fact that there are basically no good reasons for tribalism might play a part in that, although not necessarily a big one.
This isn’t clear to me. From this BBC article, “Psychologists used to believe that greater prejudice among older adults was due to the fact that older people grew up in less egalitarian times. In contrast to this view, we have gathered evidence that normal changes [i.e., aging] to the brain in late adulthood can lead to greater prejudice among older adults.”
That’s a good point. However, is this only in social interactions (which, of course, can reinforce prejudice in those who would act on it in other ways)? What about when they vote?
We’re talking about maybe 20 years of lost prejudice inhibition at most on average, so at worst about a third of adults at any given moment, but also a faster-growing proportion of people who grow up without a given prejudice they’d need to inhibit in the first place, versus (if aging is cured) many extra people biased towards views they formed possibly hundreds of years earlier. The average age in both cases should trend towards half the life expectancy, assuming replacement birth rates.
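To make the rough arithmetic here explicit, here’s a minimal back-of-the-envelope sketch in Python; the specific ages, years of lost inhibition and life expectancies are assumptions chosen for illustration, not figures from the discussion.

```python
# Back-of-the-envelope illustration of the arithmetic above.
# All numbers are assumptions chosen for the example.
adult_start = 20       # assumed age at which someone counts as an adult
life_expectancy = 80   # assumed life expectancy without a cure for aging
inhibition_lost = 20   # assumed years of weakened prejudice inhibition late in life

adult_years = life_expectancy - adult_start      # 60 years of adulthood
share_affected = inhibition_lost / adult_years   # 20 / 60 = about a third of adults
print(f"Share of adults past the inhibition threshold: {share_affected:.0%}")

# With replacement birth rates and a roughly uniform age distribution,
# the average age tends toward half the life expectancy in either scenario.
print(f"Average age without a cure: ~{life_expectancy / 2:.0f} years")
print(f"Average age with a cure and, say, a 500-year life expectancy: ~{500 / 2:.0f} years")
```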
I don’t really see why we should expect this personally. Could you point to some trends that show that humans have become more consequentialist over time? I tend to think that Hansonian moral drives are really hard to overcome.
This judgement was more based on the arguments, not trends. That being said, I think social liberalism and social democracy are more welfarist, flexible, pragmatic and outcome-focused than most political views, and I think there’s been a long-term trend towards them. Those further left are more concerned with exploitation and positive rights despite the consequences, and those further right are more concerned with responsibility, merit, property rights and rights to discriminate. Some of this might be driven by deference to experts and the views of economists, who seem more outcome-focused. This isn’t something I’ve thought a lot about, though.
Maybe communists were more consequentialist (I don’t know), but if they had been right empirically about the consequences, communism might be the norm today instead.
However, I’m very worried about the idea that caring about farm animals doesn’t imply an anti-speciesist mindset. Most vegans aren’t concerned about wild animal suffering, and the primary justification that most vegans give for their veganism is from an exploitation framework rather than a harm-reduction framework. This might not robustly transfer to future sentience.
I actually haven’t gotten a strong impression that most ethical vegans are primarily concerned with exploitation rather than cruelty specifically, but they are probably primarily concerned with harms humans cause, rather than just harms generally that could be prevented. It doesn’t imply anti-speciesism or a transfer to future sentience, but I think it helps more than it hurts in expectation. In particular, I think it’s very unlikely we’ll care much about wild animals or future sentience that’s no more intelligent than nonhuman animals if we wouldn’t care more about farmed animals, so at least one psychological barrier is removed.