This seems like an area in which utilitarianism should be bounded by deontology.
I’m confused because I think precisely the opposite is true.
If I were applying a deontological framework to automation, I’d perhaps first point out that there’s an act/omission asymmetry in using technology. By using AI art generators, you’re not actually harming anyone directly. You’re just using a fun tool to generate images. What you’re calling “harm” is in fact the omission of payments given to artists whose images were used during training, which deontology views quite differently.
While it’s true that most deontologists believe in things like property rights, and compensation for work, I am not familiar with any deontological theory that says we are obligated to compensate people who we merely learn from, unless that is part of some explicit contract.
By contrast, the only truly plausible argument I can imagine for why we should compensate artists for AI art is utilitarian. That is, providing compensation to artists would offset the implicit harm to their profession, redistributing economic surplus from consumers and producers of AI art to artists who lose out from the competition.
Such compensation is often recommended by economists as part of a package for turning Kaldor-Hicks improvements into Pareto improvements, but I have yet to hear of such a proposal from strict deontologists before. Have you?
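The Kaldor-Hicks-to-Pareto move can be made concrete with a toy numeric sketch (the groups and all dollar figures here are mine, purely illustrative): a policy that raises total surplus but leaves artists worse off is a Kaldor-Hicks improvement, and a side transfer from the winners to the artists turns it into a Pareto improvement.

```python
def is_pareto_improvement(before, after):
    """True if no party is worse off and at least one party is better off."""
    no_losers = all(after[p] >= before[p] for p in before)
    some_winner = any(after[p] > before[p] for p in before)
    return no_losers and some_winner

def is_kaldor_hicks_improvement(before, after):
    """True if total surplus rises, i.e. winners *could* compensate losers."""
    return sum(after.values()) > sum(before.values())

# Hypothetical surplus figures before and after cheap AI art arrives.
before = {"consumers": 10, "producers": 5, "artists": 4}
after  = {"consumers": 40, "producers": 15, "artists": 1}  # artists lose 3

assert is_kaldor_hicks_improvement(before, after)   # total surplus grew
assert not is_pareto_improvement(before, after)     # but artists lost out

# A transfer of 4 from consumers to artists restores everyone to at least
# their starting position, converting the K-H improvement into a Pareto one.
compensated = dict(after,
                   consumers=after["consumers"] - 4,
                   artists=after["artists"] + 4)
assert is_pareto_improvement(before, compensated)
```

This is just the bookkeeping behind the economists' recommendation: the gains are large enough that the losers can be made whole and the winners still come out ahead.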
In that case, perhaps instead of phrasing it as “utilitarianism should be bounded by deontology” I should have instead phrased it as something along the lines of “a large benefit from this system doesn’t justify the harms of creating this system.” The general idea that I am trying to gesture toward is that when the piece of art someone created is used in a way that they do not consent to, the use benefiting someone doesn’t necessarily make it okay. So while the value might be −1 over here and +50 over there, I (as a layperson, rather than a law maker) don’t think that should be used as justification. If the creator gives informed consent, then I think it sounds fine. I know that I would feel really shitty if I spent time making something, sold copies of it, and then found that someone had copied my creation and was distributing variations on it for free.
Perhaps one area where I wasn’t clear is that rather than a profession simply fading away (such as those made obsolete by the invention of digital spreadsheets or those made obsolete by the invention of automobiles), the “harm” I am referring to is an artist’s work being copied without his/her permission (or stolen, or used without consent, or pirated). So perhaps I’ve misunderstood your perspective here. I understood your perspective to be “The value from a really good entertainment generation system would be so large that it would be justified to not pay the artists for their work.” But perhaps when you referred to lost income you meant the future of their profession, rather than simply not paying for their work?
Such compensation is often recommended by economists as part of a package for turning Kaldor-Hicks improvements into Pareto improvements, but I have yet to hear of such a proposal from strict deontologists before. Have you?
No, I have not heard of such a proposal from a strict deontologist. But to my knowledge I’ve also never had any interaction with a strict deontologist. 😅
EDIT: My views are probably quite influenced by recently learning about Lensa scraping artists’ work without their consent. If I hadn’t learned about that, then I probably wouldn’t have even thought about the ethics of what goes into a content generation system.
So while the value might be −1 over here and +50 over there, I (as a layperson, rather than a law maker) don’t think that should be used as justification.
A 50:1 benefit:cost ratio is huge! Even fairly hardcore libertarians will accept policies that have such massively positive consequences. A typical policy is more likely to be −1 over here, +1.1 over there. If you’re not willing to enact a policy like this I think there is basically no policy ever that will satisfy you.
I agree that in the abstract a 50:1 benefit:cost ratio sounds great. But it also strikes me as naïve utilitarianism (although maybe I am using that term wrong?). To make it more concrete:
If you have a book that you enjoy reading, can I steal it and copy it and share it with 50 of my friends?
Is you stealing $100 from me justified if it generates far greater value when you donate that $100 to other people?
If we can save 50 lives by killing John Doe and harvesting his organs, does that justify the act?
If I can funnel millions or billions of dollars toward highly effective charities by lying to or otherwise misleading investors, does that benefit justify the cost?
These are, of course, simplistic examples and analogies, rather than some sort of rock solid thesis. And this isn’t a dissertation that I’ve thought out well; this is mostly impulse and gut feeling on my part, so maybe after a lot of thought and reading on the topic I’ll feel very differently. So I’d encourage you to look at this as my fuzzy explorations/musings rather than as some kind of confident stance.
And maybe that 50:1 example is so extreme that some things that would normally be abhorrent do actually make sense. Maybe the benefit of pirating an ebook (in which one person has their property stolen and thousands of people benefit from it) is so large that it is morally justified. So perhaps for my example I should have chosen a more modest ratio, like 5:1. 😅
I’ll also note that I think I tend to lean a bit toward negative utilitarianism, so I prioritize avoiding harm a bit more than I prioritize causing good. I think this gives me a fairly high bar for these kinds of “the ends justify the means” scenarios.