A better society sidesteps the “long-term consequences versus rules” dilemma
Consider these events:
In 1859, the abolitionist John Brown orchestrated an assault on a federal weapons arsenal, violating many laws and killing an innocent person, in order to obtain guns for a slave revolt.
In 1917, the Communist political party known as the Bolsheviks launched a violent uprising in Russia, violating many laws and provoking a civil war before inflicting massive-scale atrocities against innocent people, in order to implement a stateless, classless society.
In 2022, Samuel Bankman-Fried committed wire fraud and conspiracy against innocent people, possibly in order to raise money for philanthropic and political causes.
What these events have in common is that people were willing to break laws and inflict immediate harm in order to pursue some kind of greater good. Another thing they have in common, though people may not always be willing to admit it, is that they make a degree of ethical and rational sense. If you lived in America in 1859 you could easily decide that a slave revolt was a way to make life much better for African-Americans, and then it could plausibly make sense for you to attack a federal arsenal and try to steal guns to give to slaves. If you lived in Russia in 1917 you could easily decide that the forces of historical materialism and moral decency demanded the establishment of a communist society, and then it could plausibly make sense for you to use military force against the authorities and aristocrats who implacably opposed that goal. And if you live in America in 2022 you can easily decide that some world-saving philanthropic programs are far underfunded compared to others, and then it can plausibly make sense for you to manipulate financial markets into giving you the funding you want.
To be clear, I don’t think these actually were good ideas. They were certainly bad ideas in hindsight, but even without hindsight some people knew better. In John Brown’s time there were many abolitionists who nonetheless argued against his violent methods, in the Bolsheviks’ time there were many communists who opposed violent methods, and pre-2022 there were many effective altruists who condemned the idea of breaking the law for the sake of altruism. But these many dissenters don’t always persuade everyone. And sometimes the rulebreakers actually turn out to be right; we can easily find examples where norm- or rule-breakers comfortably achieved their goals (such as “great” imperial conquerors, and colonial revolutionaries). The unpleasant truth is that kindly following rules and norms isn’t always the most rational way to achieve a goal, and that’s probably the biggest reason why the champions of movements like abolition, communism, and effective altruism have never been able to persuade all of their compatriots to please be nicer when fighting for the cause. Even when the rulebreaking is a bad idea, it’s not such a patently terrible idea that it can be written off as purely the product of individual stupidity or vice. The political or philanthropic motivation plays a role as well.
However, not every movement ends up like this. For example, the YIMBY movement, which advocates libertarian land use for the purpose of growing the economy and reducing poverty, has not tried to detonate bombs in cities, even though doing so would lead to more new construction and economic activity. (They could try to render buildings structurally unsound, thus mandating their demolition, probably without killing anyone.) Nor have YIMBYs attempted to circumvent the law in order to construct illegal new housing. And unlike in the EA movement, in the YIMBY movement disavowing such behavior is not even a topic of much discussion. YIMBYs simply don’t consider it in the first place.
The difference between movements like slavery abolition, communism, and EA on one hand, and YIMBYism and other ordinary political movements on the other hand, is that the latter’s goals are more closely aligned with the rest of society. If society writ large shares your goals, then you will not want to harm society. A YIMBY will look at a homeowner and see that this person’s life and career still contribute to the YIMBY’s overall goals of growing the economy and reducing poverty, so short-term harm to the homeowner would undermine the YIMBY’s goals. But a communist can look at a landowner and see that his life and career are nothing but obstacles to the objectives of communism, thus making the landowner dispensable.
So one can imagine what would happen in a society where radical views like abolition, communism and effective altruism are not outliers at all. If America had not entrenched slavery so strongly or practiced it so harshly, then John Brown might not have raided Harpers Ferry. If Russia’s government and landowners had not been so incompetent and implacable then the Bolsheviks might not have started a civil war. And if mainstream American government agencies and philanthropists had given higher priority to important/neglected/tractable cause areas then Bankman-Fried might not have engaged in wire fraud. I don’t mean “radical people will become more cooperative out of gratitude if you give them their way,” I mean this in a more pragmatic sense. Inflicting short-term harm to achieve a greater good no longer makes sense when the rest of society is already progressing toward that greater good—not only because that greater good is more likely to be achieved without any aid from the rulebreaker, but also because the people harmed in the short run are more likely to be contributing toward that greater good.
To show what I mean in the extreme case, consider whether an EA would steal from another EA. Of course that could not make the world better, because the benefits of funding one EA project would be immediately outweighed by the harm caused by impoverishing another. Also, if everyone were in the EA movement, then all the especially high-impact projects would get funded, and any unfunded charities would not be astronomically more impactful than something like paying taxes (the taxes themselves presumably going to good government programs). Clearly, if everyone were an EA, then there would be no altruistic motivation to commit wire fraud or other crimes.
Of course, not everyone will be an EA for at least the foreseeable future, and EAs can still judge that their own causes are better than other EA causes. But the idea of breaking the rules for the greater good would clearly become much less attractive if more people and institutions acted in ways closer to those of effective altruism, since the victims of rulebreaking would themselves be supporters of high-impact projects, and reliable funding would already be flowing to the highest-impact causes.
Now, I don’t mean to say “more people should become effective altruists in order to reduce the risk of wire fraud (etc),” as that is obviously a rather ineffective way to reduce wire fraud (etc). Also, I don’t mean to say that people deserve to be harmed or deceived just because they are landlords, not effective altruists, etc. What I mean is that EA and longtermism are not only desirable for direct moral reasons, they—like abolitionism—are also perfectly adequate principles for a harmonious and rule-following society as long as they are sufficiently popular. In a world where, say, 10% of society actively supported EA and/or longtermism, and most philanthropic foundations and government agencies paid a modest degree of attention to these principles, the idea of rulebreaking in favor of EA or longtermism would make much less sense.
So a strong belief in longtermism doesn’t inherently give rise to rulebreaking, as Emile Torres claims in his attacks against EAs and longtermists; what does is the sharp disparity in values between longtermists and everybody else. I suspect that if this were a world governed by longtermism, then a small number of “shorttermists” would engage in rulebreaking behavior of their own, such as committing fraud to help themselves or their immediate communities. In fact, this is not unlike the problem we see in real life, where the (relatively speaking) long-term and collectivist goals of governments are ignored by all manner of criminals who break the law in pursuit of their more immediate and parochial interests.
Now, for many people, the idea of EA and longtermism getting much bigger and more influential does not seem likely, or even desirable. So the argument “if only longtermism were more popular, then your objection to it would be resolved” may not be compelling. However, there are humbler options for win-wins to be achieved. People don’t need to like everything about EA in order to see the wisdom in many of its more reasonable ideas:
People should give more to charity.
Donors and governments should pay more attention to the impact achieved by charities.
We should devote more effort to reducing global disease and severe poverty, relative to our efforts on more traditional charities.
We should try to pick jobs which have a positive ethical impact.
If humanity becomes incredibly advanced, rich, and happy, then we should increase our population so that more people can enjoy that life. Pay parents much more in compensation for the difficulty of child-raising. Basically, pronatalism of some modest sort.
If we spread animal life all over the universe then we should do it in a cautious manner that ensures the animals will have pretty good lives in the new ecosystems that we create.
Institutions should more seriously take into consideration the ways that future generations will be impacted by their decisions.
Institutions should focus less on the national good, and focus more on the global good.
People with weird ideas about potentially important issues should be taken more seriously. Less gatekeeping (e.g. trying to wreck the reputations of AGI Cassandras), more hits-based giving. Institutions should be more responsive to random groups of smart people on the Internet with new proposals; a non-EA example would be NASA and Congress listening to random people with better ideas about space travel. And yes, sometimes this leads to wrong ideas (heterodox economics?), but the good outweighs the bad.
Governments should work together more to reduce existential risk.
If people want to make use of transhuman or posthuman technologies then they should be legally allowed to do so.
This stuff is bigger than the EA movement, and I think it would tend to be applauded by the majority of people. Our institutions and culture need to react to the changed modern landscape: more awareness of our places in a global society, more types of charitable projects to consider, more information about charitable impacts, more technological risks and theories about the long-run future, more and better ideas surfacing through nonstandard channels. Our institutions in particular are not keeping pace with these changes. With or without EA, it is objectively healthy for our institutions to adjust in this direction, just as it would have been objectively healthy for the institutions of the American South in the 1850s, or of Imperial Russia in the 1910s, to become more democratic and take the welfare of their people more seriously. I think most of the above list of changes will happen sooner or later, and when they do, longtermists will fret less about whether their comrades are following the rules.
Of all things, I most greatly want to live in a world where effective altruism and longtermism are so popular that I can happily overpay on my taxes or go out of my way with generosity for strangers, without having to worry that I am ignoring important causes for the sake of silly ones. Such a world would be devoid of politically and philanthropically motivated rulebreaking—until such time as a new ideology would arise in opposition to the effective altruist and longtermist mainstream, and the cycle of human conflict would repeat itself.
“Samuel Bankman-Fried committed wire fraud and conspiracy against innocent people in order to raise money for philanthropic and political causes”
I wish people would stop saying this as if it were true—this idea that Sam’s primary motivation was all about raising money for EA or ‘the greater good’ more broadly. As Eliezer pointed out, “The amount that FTX spent on e-sports naming rights for TSM was greater than everything they donated to effective altruism”, to give just one example of a decidedly non-philanthropic-or-political cause that he apparently spent tons of money on. The fact that SBF was an EA fanboy just isn’t enough to give very much weight to the claim “if mainstream American government agencies and philanthropists had given higher priority to important/neglected/tractable cause areas then Bankman-Fried might not have engaged in wire fraud”, at all.
“So a strong belief in longtermism doesn’t inherently give rise to rulebreaking… it’s simply the sharp disparity in values between longtermists and everybody else.”
This assumption of an association between longtermism and rulebreaking is something we should be pushing back on, unless there is strong evidence for it. We shouldn’t be accepting the assumption as true and then trying to explain it post-hoc.
Thank you for the important clarification regarding FTX.
I still think the idea of longtermism → rulebreaking is something philosophically meaningful that merits my response even if FTX does not serve as a true example.
I worry that this post is claiming that EAs are uncommonly likely to recommend rules violations in order to achieve their goals (ie, ends justify the means). I don’t think that’s true, and I generally see EAs as trying very hard to be scrupulous and do right by all involved parties.
Concretely, I believe that if you went to an EA conference or a similar gathering and presented people with prisoner’s dilemma scenarios, or just lost wallets, they would behave more pro-socially than the national average.
I think that the FTX collapse is a very salient example of EA folks committing crimes (perhaps in the belief that the ends justified it?), but that doesn’t mean that EA increases the probability of crime.