Hilary Greaves and MacAskill have postulated that decision making in EA must be premised on two factors: (a) every option that is near-best overall is near-best for the far future; and (b) every option that is near-best overall delivers significantly greater benefits to far-future people than to near-future people.[26]
Point of information: I don’t think that they’ve said that all decision making in EA should be based on axiological or deontic strong longtermism, which is specifically what that paper is about.