I personally found it a very refreshing change of language/thinking/style from the usual EA Forum/LessWrong post, and the extra effort to (hopefully) understand it was well worth it and highly enjoyable.
My one-sentence summary/translation would be that advocating for longtermism would likely benefit on the margin from drawing more on a virtue ethics approach (e.g. using saints and heroes as examples) and less on a rationalist/utilitarian one, as most people feel even less of an obligation towards future beings than towards the global poor, and many of the most altruistic people act altruistically for emotional/spiritual reasons rather than rational ones.
I could definitely have misunderstood the post, though, so please correct me if I have misinterpreted it. There are also many more valuable points, e.g. that most people already agree on an abstract level that future people matter and that actively causing them harm is bad, so I think the post claims that longtermists should focus less on strengthening that case and more on other things. Another interesting point is that, to “mitigate hazards we create for ourselves”, we could take advantage of the fact that “causing harm is intuitively worse than not producing benefit” for most people.
I think SummaryBot below also did a good job of translating.
Thank you for this.
There are two slightly ‘meta’ issues here in that a) I cannot help but already be working in a different style, as that is my background (which I appreciate some will find cumbersome), and b) I wanted to avoid giving ‘my’ solutions, as I am interested to see how else the challenges I raise can be responded to.
I would further add only that I too recommend the SummaryBot for a TL;DR.