Impartiality. Maybe I'm more biased towards rats/EAs, but not in ways that seem likely to be decision-relevant?
You could construct thought experiments in which I wouldn't behave in an ideal utilitarian way, but for scenarios that actually manifest in the real world, I think I can be approximated as following some strain of preference utilitarianism?
by helping other people as much as possible, without any expectation of your favours being returned in the near future, you end up being much more successful, in a wide variety of settings, in the long run.
This is what you mention, and I agree with it. But
if you and I share the same values, the social situation is very different: if I help you achieve your aims, then that's a success in terms of achieving my aims too. Titting constitutes winning in and of itself; there's no need for a tat in reward. For this reason, we should expect the optimal norms to be very different from the ones we are used to: giving and helping others will be a good thing to do much more often than it would be if we were all self-interested.
One of the incredible strengths of the EA community is that we all share values and share the same end-goals. This gives us a remarkable potential for much more in-depth cooperation than is normal in businesses or other settings where people are out for themselves. So next time you talk to another effective altruist, ask them how you can help them achieve their aims. It can be a great way of achieving what you value.
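The tit-for-tat point in the quote above can be made concrete with a toy payoff sketch. The numbers and function names here are my own illustration, not from the original post: under purely self-interested payoffs, helping only pays if it is reciprocated, whereas under shared values the "tit" is a win by itself.

```python
# Illustrative payoffs (arbitrary units): helping costs me 1 and is worth 3 to you.
COST_OF_HELPING = 1
BENEFIT_TO_YOU = 3

def selfish_payoff(reciprocated: bool) -> int:
    """My payoff if I only count outcomes for myself."""
    payoff = -COST_OF_HELPING
    if reciprocated:
        payoff += BENEFIT_TO_YOU  # I need a "tat" in return to come out ahead
    return payoff

def shared_values_payoff(reciprocated: bool) -> int:
    """My payoff if your gains count towards my goals too (shared values)."""
    payoff = -COST_OF_HELPING + BENEFIT_TO_YOU  # your gain is my gain
    if reciprocated:
        payoff += BENEFIT_TO_YOU - COST_OF_HELPING  # your help is also a joint win
    return payoff

# Self-interested: helping is a net loss unless it is returned.
print(selfish_payoff(reciprocated=False))        # -1
print(selfish_payoff(reciprocated=True))         # 2

# Shared values: helping pays off even with no tat in reward.
print(shared_values_payoff(reciprocated=False))  # 2
```

The sketch only restates the quote's logic: with shared end-goals, unreciprocated helping already has positive expected value, so norms of giving freely can be stable without tit-for-tat enforcement.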
I really think altruism/value-alignment is a strength, and a group would lose a lot of efficiency by not valuing it.
(Of course, it's not the only thing that matters.)
Empirically, it feels hard to get much credit/egoist-value from helping people? Maybe your experience has just been different. But I don't find helping people very helpful for improving my status.
Iโm a rationalist.
I take scope sensitivity very seriously.
Impartiality. Maybe I'm more biased towards rats/EAs, but not in ways that seem likely to be decision-relevant?
You could construct thought experiments in which I wouldn't behave in an ideal utilitarian way, but for scenarios that actually manifest in the real world, I think I can be approximated as following some strain of preference utilitarianism?
I'm trying to question this in the abstract, rather than talking about you specifically.
Some quotes on helping other altruists:
This is what you mention, and I agree with it.
But I really think altruism/value-alignment is a strength, and a group would lose a lot of efficiency by not valuing it.
(Of course, it's not the only thing that matters.)
Empirically, it feels hard to get much credit/egoist-value from helping people? Maybe your experience has just been different. But I don't find helping people very helpful for improving my status.
Have you read How to Win Friends and Influence People? IIRC, more than half the book is about taking an interest in other people, helping them, etc.