This is a critically important and neglected topic, and I’m glad you wrote about it. I’ve written about this distinction before but I think you did a much better job of explaining why it matters.
Here are some more writings on the subject, along with a summary of my favorite points from each article:
Michael Bitton: Why I Don’t Prioritize GCRs
GCR prevention only matters if GCRs will happen soon enough
If one GCR happens first, the others don’t matter, but we don’t know which will come first
Efforts to direct humanity have a poor track record
Brian Tomasik: Values Spreading Is Often More Important than Extinction Risk
Most people are incentivized to prevent extinction but not many people care about my/our values
(Mathematical argument that I can’t really simplify but is worth reading in full)
Paul Christiano: Against Moral Advocacy
If we try to change values now, they will tend to drift
I don’t want to lock in my current values because they could be wrong
Values tend to be similar, so it is possible to pursue competing objectives with only modest losses in efficiency
Paul Christiano: Why Might the Future Be Good?
The far future might be good either because (1) rational self-interested agents will make gains from trade or (2) future altruists will share my values and have power
Whether we expect (1) or (2) changes what we should do now
Natural selection might make people more selfish, but everyone is incentivized to survive no matter their values so selfish people won’t have an adaptive advantage
People who care more about the (far) future will have more influence on it, so natural selection favors them
I’d double-upvote this if I could. Providing (high-quality) summaries along with links is a great pro-social norm!
A couple of remarks:
The very same, from a future perspective, applies to values-spreading.
This is a suspiciously antisocial approach that only works if you share Brian’s view that not only are there no moral truths for future people to (inevitably) discover, but that it is nonetheless very important to promote one’s current point of view on moral questions over whatever moral views are held in the future.
Why do you think that? There are different values we can change that seem somewhat independent.
That seems mean and unfair. Having different values than the average person doesn’t make you antisocial or suspicious; it just makes you different. In fact, I’d say most EAs have different values than average :)
If you spread some value and extinction then eventuates, your work does not matter in the long run. So this doesn’t separate the two courses of action, on the long-run view.
That’s not how it’s antisocial. It’s antisocial in the literal sense that it is antagonistic to social practices. Basically, it’s more than believing in uncommon values: it’s acting in a way that violates what we recognise to be at least valid heuristics, like not engaging in zero-sum competition, and especially not moralising efforts to do so. If EAs and humanity can’t cooperate while disagreeing, it’s bad news. Calling it mean and unfair is a premature judgement.
If you mean antisocial in the literal sense, you could and should probably have clarified that originally.
If you mean it in the usual sense, where it’s approximately synonymous with labelling Brian as ‘offensive’ or ‘uncaring’, then the charge of ‘mean and unfair’ seems reasonable.
Either way you shouldn’t be surprised that someone would interpret it in the usual sense and consider it unfair.
That’s not really an accurate representation. I’m trying to say that it’s anti-cooperative, which it mostly is, more so than offensive or uncaring.