A difference between how human artists learn and AI models learn is that humans have their own experiences in the real world to draw from and combine these with the examples of other people’s art. Conversely, current AI models are trained exclusively on existing art and images and lack independent experiences.
It’s also well known that AI art models are frequently prompted to generate images in the style of particular artists like Greg Rutkowski or, more recently, Studio Ghibli. Human artists tend to develop their own style, and when they deliberately copy someone else’s style, the results are often looked down on as forgeries. AI models seem to be especially good at stylistic forgeries, and it might be argued that, given the lack of original experiences to draw from, all AI art is essentially forgery, or a mixture of forgeries.
Stylistic pastiche is unambiguously protected by the First Amendment; it isn’t “forgery”.
Can you cite a source for that? All I can find is that the First Amendment covers parody and to a lesser extent satire, which are different from pastiche.
Also, pastiche usually is an obvious homage and/or gives credit to the style’s origins. What AI art makers often do is use the name of a famous artist in the prompt to make an image in their style, and then not credit the artist when distributing the resulting image as their own. To me, even if this isn’t technically forgery (which would involve pretending this artwork was actually made by the famous artist), it’s still ethically questionable.
This is more a copyright law question than a First Amendment one, at least under current law. E.g., https://www.trails.umd.edu/news/ai-imitating-artist-style-drives-call-to-rethink-copyright-law.
I believe whether the 1A requires this outcome is unclear at present. Of course, there’s a lot of activity protected by the 1A that is horrible to do.
https://law.stackexchange.com/questions/98968/are-art-styles-subject-to-ip-protection
That link has to do with copyright. I will give you that pastiche isn’t a violation of copyright. Even outright forgeries don’t violate copyright. Forgeries are a type of fraud.
Again, pastiche in common parlance describes something that credits the original, usually by being an obvious homage. I consider AI art different from pastiche because it usually doesn’t credit the original in the same way. The Studio Ghibli example is an exception because it is very obvious, but AI art prompted with Greg Rutkowski’s name, for instance, is often much harder to identify as such.
I admit this isn’t the same thing as forgery, but it does seem unethical in the sense that you are not crediting the originator of the style. This may violate no laws, but it can still be wrong.
So I think we may have a crux: are “independent experiences” necessary for work to be transformative enough to make the use of existing art OK? If so, do the experiences of the human user(s) of AI count?
Here, I suspect Toby contributed to the Bulby image in a meaningful way; this is not something the AI would have generated on its own or from bland, generic instructions. To be sure, the AI did more to produce this masterpiece than a camera does to produce a photograph, but did Toby do significantly less than the minimum we would expect from a human photographer to classify the output as human art? (I don’t mean to imply we should treat Bulby as human art, only as art with a human element.)
That people can prompt an AI to generate art in a way that crosses the line into so-called “stylistic forgery” doesn’t strike me as a good reason to condemn all AI art output. Nor does it undermine the idea that an artist whose work is only a tiny, indirect influence on another artist’s work has not suffered a cognizable injury; that kind of influence is inherent in how culture is transmitted and developed. Rather, I think the better argument is that too much copying from a particular source makes the output not transformative enough.
You could argue that Toby’s contribution is more what the commissioner of an artwork does than what an artist does.
On the question of harm, a human artist can compete with another human artist, but that’s just one artist, with limited time and resources. An AI art model could conceivably be copied extensively and used en masse to put many or all artists out of work, which makes possible a much greater level of harm.