Red-team—“Are longtermism and virtue ethics actually compatible?”
A convincing red-team wouldn’t need a complex philosophical analysis, but rather a summary of divergences between the two theories and an exploration of five or six ‘case studies’ where consequentialist-type behaviour and thinking are clearly ‘unvirtuous’.
Explanation—Given just how large and valuable the long-term future could be, it seems plausible that longtermists should depart from standard heuristics around virtue. For instance, a longtermist working in biosecurity who cares for a sick relative might have good consequentialist reasons to abandon their caring obligations if a sufficiently promising position came up at an influential overseas lobbying group. I don’t think EAs have really accepted that there is a tension here; doing so seems important if we are to have open, honest conversations about what EA is, and what it should be.
To provide an anecdote (n = 1, of course) relevant to the Benjamin Todd thread: I had known about EA for years, and agreed with the ideas behind it. But the thing that got me to actually take concrete action was joining a group that, among other things, asked its members to do a good deed each day. Once I got into the habit of doing good deeds (and, even more importantly, actively looking for opportunities to do good deeds), however small or low-impact, I began thinking about EA more, and finally committed to trying to give 10% for a year, then signed the pledge.
Without pursuing classical virtue, I would be unlikely to be involved in EA now. My agreement with EA philosophically remained constant, but my willingness to act on a moral impulse was what changed. I built the habit of going from “Someone should do something” to “I should do something” with small things like stopping to help a stranger with a heavy box, and that transferred to larger things like donating thousands of dollars to charity.
Thus, I am interested in the intersection of EA and virtue and how they can work together. EA requires two things: (1) philosophical agreement, and (2) commitment to action. In my case, virtue helped bridge the gap between 1 and 2.
I would be interested in this one.