I agree that calls to "throw out [all] attempts at doing moral calculus" are overreactions. The badness of fraud (etc.) is not a reason to reject longtermism, or earning to give, or other ambitious attempts to promote the good that are perfectly compatible with integrity and good citizenship. It merely suggests that these good goals should be pursued with integrity, rather than fraudulently.
But I do think it would be a mistake for people to act with integrity only when explicit calculations yield that result on each occasion. Rather, I think we really should throw out some attempts at doing moral calculus. (It's never been a part of mainstream EA that you should try to violate rights for the greater good, for example. So I think the present discussions are really just making more explicit constraints on utilitarian reasoning that were really there all along.)
When folks wonder whether EA has the philosophical resources to condemn fraud etc., I think it's worth flagging three (mutually compatible) options:
(1) Some EAs accept (non-instrumental) deontic constraints, compatibly with a utilitarian-flavoured conception of beneficence that applies within these constraints.
(2) Others may at least give sufficient credence to deontological views that they end up approximating such acceptance when taking account of moral uncertainty (as you suggest).
(3) But in any case, we shouldn't believe that fraud and bad citizenship are effective means to promoting altruistic goals, so even pure utilitarianism can straightforwardly condemn such behaviour.
I don't think that (2) alone will necessarily do much, because I don't think we really ought to give much credence to deontology. I give deontology negligible credence myself, much less than 1% (for many reasons, including this argument). Since my own uncertainty effectively only ranges over different variants of consequentialism, this uncertainty doesn't seem sufficient to make much practical difference in this respect.
So I'm personally inclined to put more weight on heuristics that firmly warn against fraud and dishonesty on instrumental grounds. I'm not sure whether this is what you're dismissing as "barely-evidenced heuristics", as I think our common-sense grasp of social life and agential unreliability actually provides very strong grounds (albeit not explicit quantitative evidence) for believing that these particular heuristics are more reliable than first-pass calculations favouring fraud. And the FTX disaster (at least if truly motivated by naive utilitarianism, which I agree is unclear) constitutes yet further evidence in support of this, and so ought to prompt rethinking from those who disagree on this point.
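To make the structure of that claim explicit with a toy calculation (the numbers here are purely illustrative assumptions, not estimates drawn from anything above): let p be the chance that a first-pass calculation favouring fraud is actually correct, G the gain if it is, and L the loss (to victims, to trust, to the movement) if it is not. If calculations of this type rarely survive scrutiny, say p = 0.05, with G = 100 and L = 1000 in arbitrary units, then

$$\mathbb{E}[\text{fraud}] = pG - (1-p)L = 0.05 \times 100 - 0.95 \times 1000 = -945,$$

so the act is strongly negative in expectation even though the face-value calculation endorsed it. That asymmetry is what the heuristic encodes.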
As you say, we can see that any recent fraudulent actions were not truly rational (or prudent) given utilitarian goals. But that's a point in support of explanation (3), not (2).
I think it is more interesting to think about other people as rational agents. If Bitcoin had grown to $100K, as was widely expected in 2021, SBF's bets would have paid off and he would have become the world's first trillionaire. He would also have been able to return all the money he took from creditors.
He may have understood that there was only about a 10 per cent chance of becoming a trillionaire, but if he thought that a trillion dollars for preventing x-risks was the only chance to save humanity, then he knew he should take that bet.
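On a naive expected-value reading of that bet (the 10 per cent and one-trillion figures are from the paragraph above; the comparison itself is my gloss, not a claim about SBF's actual reasoning):

$$\mathbb{E}[\text{bet}] \approx 0.1 \times \$1\,\mathrm{T} + 0.9 \times \$0 = \$100\,\mathrm{B},$$

which dwarfs any sure alternative on its face, so a pure expected-value maximiser who accepted those premises would take the bet (ignoring, as naive calculations do, the downside costs discussed above).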
Now we live in the timeline where he lost, and it is more tempting to say that he was irrational or mistaken. But maybe he was not.