An incredibly mainstream view is to care about everyone alive today and everyone who will be born in the next 100 years. I have to imagine over 90% of people in the world would agree to that view or a view very close to that if you asked them.
I think we have an empirical disagreement here. If I felt strongly motivated to try to persuade you about this, I would go try to find studies about it; I suspect we may not even have 90% agreement on "everyone alive today is worthy of moral concern", and I would strongly guess we don't have that level of agreement on caring about people who will be born 50 years from now. (Although I would also guess that many people just don't think about this kind of question very much and aren't guaranteed to have very clear or consistent answers.)
Even if people agreed with the premises, we could try to justify longtermism as arguing that the consequences of this belief are underexplored, though I hear you that you don't see a lot of neglected consequences.
At this point, though, I'm not actually that invested in trying to champion longtermism specifically, so I'm not the right person to defend it to you here. Let's fix x-risk and check in about it after that :)
I haven't looked at any surveys, but it seems universal to care about future generations. This doesn't mean people will necessarily act in a way that protects future generations' interests (it doesn't mean they won't pollute or deforest, for example), but the idea itself is not controversial and is widely accepted.
Similarly, I think it's basically universal to believe that all humans, in principle, have some value and have certain rights that should not be violated, but then, in practice, factors like racism, xenophobia, hatred based on religious fundamentalism, anti-LGBT hatred, etc. lead many people to dehumanize certain humans. There is typically an attempt to morally justify this, though, for example through appeals to "self-defense" (or similar concepts).
If you apply strict standards to the belief that everyone alive today is worthy of moral concern, then some self-identified effective altruists would fail the test, since they hold dehumanizing views about Black people, LGBT people, women, etc.
That's getting into a different point than I was trying to make in the chunk of text you quoted, which is just that Will MacAskill didn't fall out of a coconut tree and come up with the idea that future generations matter yesterday. His university, Oxford, is over 900 years old. I believe in his longtermism book he cites the Iroquois principle of making decisions while considering how they will affect the next seven generations. Historically, many (most?) families on Earth have had close relationships between grandparents and grandchildren. Passing down tradition and transmitting culture (e.g., stories, rituals, moral principles) over long timescales is considered important in many cultures and religions.
There is a risk of a sort of plagiarism with this kind of discourse, where people take ideas that have existed for centuries or millennia across many parts of the world and then package them as if they are novel, without adequately acknowledging the history of the ideas. That's like the effective altruist's or the ethical theorist's version of "not invented here".