I still broadly endorse this post. Here are some ways my views have changed over the last 6 years:
At the time I wrote the OP, I considered consequentialist evaluation the only rubric for judging principles like this, and thought the only reason we needed anything else was the intractability of consequentialist reasoning or moral uncertainty. I’m now more sympathetic to other moral intuitions and norms, and think my previous attempts to shoehorn them into a consequentialist justification involved some motivated cognition and philosophical error.
That said, I’m now more sympathetic to evidential cooperation in large worlds and a bit less confused about decision theory. So overall I’m more convinced by a range of consequentialist arguments for common-sense moral judgments, including the principle expressed in this post. I don’t think this is the most important form of justification, but it does slightly strengthen those intuitions and plays a role when trying to clarify them in weird cases (e.g. when considering our obligations towards AI systems rather than humans).
I’m more hesitant about retaliation than when I wrote the OP, and am mostly unwilling to “do malicious things that have no direct good consequences for me” except in cases where people have opted in to retaliation for bad behavior (e.g. by agreeing to a contract or putting down a deposit).
Although I still endorse this post and think that some relevant arguments have become stronger, I’m more sensitive to a bunch of ways it’s complicated and incomplete. Overall I have less conviction about everything in this space. I do still try to just behave with integrity in a straightforward way, and do think that this is an unusually robust ethical conclusion despite acknowledging more uncertainty about it.
What arguments/evidence caused you to be more hesitant about retaliation?