There’s one huge difference between aiming to do good and aiming to make profit. If you set up a company aiming to make money, generally the very worst that can happen is that you go bankrupt; there’s a legal system in place that prevents you from getting burdened by arbitrarily large debt. However, if you set up a project aiming to do good, the amount of harm that you can do is basically unbounded.
I am very confused about this reasoning. It seems clear that there is a lot worse harm that can be caused by a for-profit enterprise than simply that enterprise going bankrupt. What about weapons manufacturers or fossil fuel and tobacco companies? There are many industries that profit from activities that many people would consider a net harm to humanity.
The key difference I see with a non-profit enterprise aiming to do good is that its scale is driven by external factors: the choices of donors. The harm a non-profit can cause is bounded simply because the funding it receives from its donors is bounded. In contrast, a successful for-profit enterprise has a mechanism to scale itself up by using its profits to grow the enterprise. The practical implication is that for-profit corporations do end up growing to scales where they have great potential to do a lot of harm.
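To make this scaling asymmetry concrete, here is a toy model (every number is invented, purely for illustration): a donor-funded budget stays flat at whatever donors choose to give each year, while a business that reinvests its profits compounds year over year.

```python
# Toy model of the scaling asymmetry described above. All numbers
# are hypothetical and chosen only to illustrate the mechanism.

donations_per_year = 10.0  # non-profit budget, fixed by donors (arbitrary units)
growth_rate = 0.20         # for-profit grows 20%/yr by reinvesting profits

nonprofit_budget = donations_per_year
business_scale = 10.0      # both start at the same size

for year in range(20):
    nonprofit_budget = donations_per_year   # bounded: resets to donor funding
    business_scale *= 1 + growth_rate       # compounds: profits fund growth

# After 20 years the for-profit is roughly 38x larger, so whatever harm
# (or good) scales with its size has far outgrown the donor-bounded non-profit.
print(nonprofit_budget, round(business_scale, 1))
```

Nothing hinges on the particular growth rate; any positive reinvestment rate eventually outruns a fixed donor budget.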
None of which is to say that the effective altruism movement, which as you indicate is expecting many billions of USD in funding, doesn’t have great potential to do harm. It does need to take that responsibility seriously. Perhaps more importantly though, seeing as the EA movement is encouraging people to earn to give, it behooves the EA movement to consider harms caused in the process of earning. Moskovitz’s wealth derives from Facebook, which has arguably done great harm globally by, among many other things, helping organize the genocide of Rohingya Muslims in Myanmar and the January 6 Capitol insurrection in the US. Bankman-Fried’s wealth derives from arbitrage on bitcoin sales and other crypto-related ventures. Cryptocurrency wealth currently has a massive externality of CO2 emissions produced by running energy-intensive proof-of-work algorithms on fossil power. Bankman-Fried isn’t responsible for emissions equivalent to what he would have produced had he generated all his wealth by mining bitcoin on fossil power, but he is certainly responsible for some fraction (50%? 20%?) of that.
Maybe if the EA community is judicious in allocating the capital Moskovitz and Bankman-Fried are planning to provide, it will become quite clear that the benefits of that earning outweighed the harms. The funding of carefully considered public health measures and efforts to give directly to people living in poverty raises my confidence that the EA community has a chance of achieving this. However, funding efforts to mitigate the existential risk of imagined future AIs while ignoring the funding of institutes like the Algorithmic Justice League, which seek to understand harms already being caused by existing AI algorithms, lowers my confidence.
There’s one huge difference between aiming to do good and aiming to make profit. If you set up a company aiming to make money, generally the very worst that can happen is that you go bankrupt; there’s a legal system in place that prevents you from getting burdened by arbitrarily large debt. However, if you set up a project aiming to do good, the amount of harm that you can do is basically unbounded.
I am very confused about this reasoning. It seems clear that there is a lot worse harm that can be caused by a for-profit enterprise than simply that enterprise going bankrupt. What about weapons manufacturers or fossil fuel and tobacco companies? There are many industries that profit from activities that many people would consider a net harm to humanity.
I think it was good that you noticed your confusion! In this case, I believe your confusion primarily stems from misunderstanding the paragraph. Will is not saying that the worst a company can do from the impartial point of view is go bankrupt. He’s saying that the worst a company can do from a profit-maximizing perspective (“aiming to make profit”) is go bankrupt. Whereas (EA) charities are presumed to be judged from the impartial point of view, in which case it would be inappropriate to ignore the moral downsides.
To be clear, stating that I was confused was a polite way of indicating that I think this reasoning itself is confused. Why should we evaluate for-profit businesses only from a profit-maximizing perspective? Having profitability as the primary goal of an enterprise doesn’t preclude that enterprise from doing massive amounts of harm. If a for-profit enterprise does harm in its attempts to make profit, should we ignore that harm simply because it has succeeded in turning a profit? If your interpretation of Will’s reasoning is what he intended, then he is asking us to compare aiming to do good and aiming to make profit by evaluating each on different criteria. Generally, this is a misleading way of making comparisons.
This is important because this sort of reasoning is used to justify an uncircumspect version of encouraging people to earn to give that I see coming from this community. As I’ve seen it, the argument goes that in your career you should focus on acquiring wealth rather than doing good, because you can then use that wealth to do good by being an effective altruist. But this ignores that you can potentially do more harm in acquiring great wealth than you can compensate for by using your wealth altruistically.
As I mentioned, I think there are some good reasons to believe the money Bankman-Fried and Moskovitz are contributing to the EA community was acquired in ways that may have caused significant harm. This doesn’t mean the EA community should reject this money, and if the money is used judiciously, Bankman-Fried’s and Moskovitz’s successes in acquiring wealth may well turn out to be net positives for humanity. But it is also not hard to think of examples where so much harm was done in acquiring the funding of a charitable organization that even if the charity does a lot of good it won’t be a net positive. For example, consider the Sackler family, who, in the process of acquiring their wealth, were happy to ignore that they were creating tens of thousands of opioid addicts. Their charitable work (mostly donating to already well-funded museums, as far as I know) probably also wouldn’t be evaluated well by the EA community. But the real harm happened even before they attempted to clean up their reputations through charitable donations.
In light of the harms that can be caused in acquiring wealth, I think the EA community should be more circumspect about encouraging people to earn to give. Succeeding in business to maximize your positive impact through charitable work isn’t necessarily bad advice, but you do need to account for the harms businesses can do.
The way I read it, Will is comparing the challenge of doing good to the challenge of earning money (for its own sake). Not, as you assume, to the challenge of doing good by earning money.
The point is that if we try to learn any lessons from people who are optimizing to earn money, we’ll need to keep in mind that we have reason to be more risk-averse than they are.
Yes, this is what I meant to say, but failed to communicate. Thank you for putting it more succinctly and politely than I could.
If someone from High-Impact Athletes writes a post outlining what we can learn from deliberate practice in professional sports, and mentions salient differences between how metrics work in sports training and EA training as a caveat to this advice, I do not think the correct takeaway is “this whole discussion is irrelevant to EA because sports are of ~0 impact for the world, and probably net negative anyway.”
What did I say that warranted this comparison? I am not saying anything anyone has said is irrelevant to EA. I am saying the argument Will made, that for-profit businesses can’t do much harm because their worst outcome is bankruptcy, is misleading. I am also saying that encouraging people to earn to give without considering the harm that can be done by earning is one of the ways the EA movement could end up having a net negative impact.
At first I thought you genuinely misread his comment, and politely corrected you. And then your next comment (which did not thank me, or acknowledge the correction) suggested you deliberately misread it so you could get on your hobbyhorse about something else. So I tried to illustrate why this is not an appropriate strategy with an analogy. Perhaps my issue was that I was being too metaphorical and insufficiently direct earlier. In that case I’m sorry for the lack of clarity in communication.
I am very confused about this reasoning. It seems clear that there is a lot worse harm that can be caused by a for-profit enterprise than simply that enterprise going bankrupt.
There are several extremely bad outcomes of bad charities:
One example, where people actually died, is in the footnotes[1].
Another famous example is a pumping system for developing countries that consumed donor money and actively made it more difficult to get water.
It’s not clear anything would have stopped these people besides their own virtue or self-awareness, or some kind of press attention. The effect of first-world faculty and wealth is overwhelming and can trample all kinds of safeguards (an American volunteer blew the whistle in the first case, a fact that is thought-provoking and informative, after hundreds of children and actual doctors had passed through the clinic).
What about weapons manufacturers or fossil fuel and tobacco companies? There are many industries that profit from activities that many people would consider a net harm to humanity. Structurally, in a business, help or harm isn’t related to the main activities of the business.
In a real business, overwhelming effort is being made to make sure the business is successful. In the hundred trillion dollar world economy, almost no one is paying money to help or harm people.
For any given amount of money, you can do tremendously more harm, and kill people, with the meme of doing good than by running a business, even in situations where there aren’t many functioning institutions.
Here is one example that made it into NPR and the New Yorker, Guardian, etc.
I suspect the truth is more complicated than these articles suggest. I think being a woman, blonde, American, and blogging/Instagramming played a role in why this person is being read about in US national media; the implication I’m making is that this might be happening all the time.
It’s incredibly, deceptively hard to accomplish anything in very different cultures/economies/societies, much less cost effectively. Achieving this is possible, but rare and hard and undervalued.
My point wasn’t that charities are incapable of doing harm. There are many examples of charities doing harm, as you point out. The point is that reasoning that non-profits have more potential to cause harm than for-profits seems to ignore that many for-profit enterprises operate at much larger scales than any non-profits and do tremendous amounts of harm.
In a real business, overwhelming effort is being made to make sure the business is successful. In the hundred trillion dollar world economy, almost no one is paying money to help or harm people.
Yes, most successful businesses are primarily focused on making profit rather than doing good or harm. But this doesn’t mean they aren’t willing to do harm in the pursuit of profit! If someone dedicates an amount of money to a business that then grows large enough to do lots of harm, even if as a side effect, it’s quite conceivable they could accomplish more total harm than someone simply dedicating that same money to a directly harmful (but not profitable) venture.
The point is that reasoning that non-profits have more potential to cause harm than for-profits seems to ignore that many for-profit enterprises operate at much larger scales than any non-profits and do tremendous amounts of harm
You’re absolutely right. For-profits absolutely do harm. In general, “capitalism has really huge harms”; almost every EA or reader here would agree (note that I’m not necessarily an EA, nor do I represent EA thought).
The scale is the point here; you’re also exactly right. For many activities, it takes many, many millions to create a situation where we are harming people.
To be tangible, it’s hard to think of any business you or I could set up that would be as harmful as posing as a fake charity and disrupting medical service or food supply.
Well, I’ve actually sort of slipped into another argument about scale and relative harm, and got you to talk about that.
But that doesn’t respond to your original point, that businesses can do huge harm and EA needs to account for that. So that’s unfair to you.
Trying to answer your point, and using your view about explicitly weighing and balancing harms, there’s another point about “counterfactual harm” that responds to a lot of your concerns.
In the case of a cryptocurrency company:
If you make a new crypto company, and become successful by operating a new exchange, even if you become the world’s biggest exchange, it’s unclear how much that actually caused any more mining (e.g. by increasing Bitcoin’s price).
There are dozens of exchanges already, besides the one you created. So it’s not true that you can assign or attribute 20% or 50% of emissions to that money, just from association.
In reality, I think it’s reasonable to expect the effect is small, so even if the #1 trading platform hadn’t been founded, almost the same amount of mining would occur. (If you track cryptocurrency prices, it seems plausible that no one cares that much about the quality of exchanges.)
So the money that would have gone to your platform and been donated to charity, would buy yachts for someone else instead.
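This counterfactual framing can be sketched numerically (every figure below is invented, just to show the shape of the argument): a flat “association” share of total mining emissions versus the much smaller share of mining that would not have happened without your exchange.

```python
# Toy counterfactual-attribution sketch. All figures are hypothetical;
# nothing here reflects real exchanges, prices, or emissions.

def attributed(total_emissions, caused_fraction):
    """Emissions attributable under a counterfactual model: only the
    mining that would NOT have occurred without the exchange."""
    return total_emissions * caused_fraction

total = 60e6  # tonnes CO2/yr from proof-of-work mining (made up)

naive_share = 0.20 * total                # "50%? 20%?" attribution by association
counterfactual = attributed(total, 0.01)  # exchange raised mining activity ~1%

# The two models disagree by a factor of 20, which is the whole dispute:
# attribution by association vs. attribution by counterfactual impact.
print(naive_share, counterfactual)
```

The disagreement is entirely about the `caused_fraction` input, which is an empirical question about how much exchanges actually move mining activity.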
(By the way: as part of your cryptocurrency company, if you make and promote a new cryptocurrency that doesn’t mine and “stakes” instead, then your company might accelerate the transition to staking, which doesn’t produce greenhouse gases like mining does. Your contribution to greenhouse gases is negative despite being a crypto company. But I share the sentiment that you can totally roll your eyes at this idea; let’s just leave this point here.)
You mentioned other concerns about other companies. I think it’s too difficult for me to respond, for reasons that aren’t related to the merit of the concern.
Since we agree scale is a key part of this, I don’t know how you can be confident that an imagined fake charity that disrupts medical service or food supply would ever be large enough to equal the scale of the harms caused by some of the most powerful global corporations. In this era of extreme wealth inequality it’s plausible that some billionaire could accrue massive personal wealth and then transfer that wealth to a non-profit, which, freed from the shackles of having to make a profit, could focus solely on doing harm. But equally that billionaire could found an institution focused on turning a profit while doing harm and use the profits to grow the institution to a scale, and concomitant harm, that far exceeds what they would have been able to achieve with a non-profit.
For example, if we are going to imagine a fake charity that disrupts delivery of a medical service, why don’t we imagine that they do this by acting as a middleman that resells medical supplies at an unconscionable markup. This profit, in turn, enables them to grow and slip their “services” between more people and their medical providers. While this may seem like a criminal enterprise, for many companies that exist today this is basically their business model, and they operate at scales that eclipse most medical non-profits I know of.
Being a non-profit does provide good cover for operating in a harmful fashion, but growth through accumulation of capital is a very powerful mechanism; I don’t think we should be surprised to find the largest harm-causing institutions use it as their engine.
I don’t know how you can be confident that an imagined fake charity that disrupts medical service or food supply would ever be large enough to equal the scale of the harms caused by some of the most powerful global corporations.
But we’re talking about the relative harm of a bad new charity compared to a harmful business.
I think you agree it doesn’t make sense to compare the effect of our new charity versus literally all of capitalism, or a major global corporation.
But equally that billionaire could found an institution focused on turning a profit while doing harm and use the profits to grow the institution to a scale and concomitant harm that far exceeds what they would have been able to achieve with a non-profit.
Let’s be honest: we both know perfectly well that your view and understanding of the world is that, if a business could make significantly more profit being evil, it would already be doing it. That niche would be filled. I probably agree.
But if that’s true, it must be that even an amoral businessperson could not make a profit by doing the same evil; all the evil capitalists got there first. So there’s no evil super-harm possible as described in your story.
why don’t we imagine that they do this by acting as a middleman that resells medical supplies at an unconscionable markup. This profit, in turn, enables them to grow and slip their “services” between more people and their medical providers. While this may seem like a criminal enterprise, for many companies that exist today this is basically their business model, and they operate at scales that eclipse most medical non-profits I know of.
Yes, basically, we sort of both agree this is happening.
The difference between our opinions is that, I think in healthy marketplaces, this profit seeking is extremely positive and saves lives (ugh, I sound like Kevin Murphy.)
Also, we both know that there’s not going to be any way to agree or prove each other wrong or right about this specific issue.
And we’re really really far from the point here (and I think it’s better addressed by my other comment).