General Discussion
People who know more about AI art find it less ethical (Scientific American, March 6, 2026)
https://www.scientificamerican.com/article/people-who-know-more-about-ai-art-find-it-less-ethical/

When people understand the system and process behind AI art, its moral implications become harder to accept
By Ionela Bara
-snip-
A viewer's comfort with AI art, however, may depend on how much they know about how it's made. I study neuroaesthetics, a field that combines neuroscience, psychology and our perception of beauty and art. My colleagues and I have found that the more people learn about how AI's back end works (the datasets, the training process, the prompting), the less comfortable they are with the moral considerations surrounding these creations and the value of AI-generated pieces.
-snip-
In the first experiment, we showed our participants 20 landscapes and 20 portraits that were generated using DALL-E 3 with prompts based on the Impressionist art of the Spanish painter Joaquín Sorolla. Half of the participants viewed this AI art with no added context. The other half received a short text that gave them more information. It read: "This image was generated by an AI algorithm that produces images from textual descriptors. To accomplish that, several steps are required. First, the AI algorithm is trained by learning a large dataset of art images and their corresponding text descriptors, such as the artist's name. Then, the AI algorithm is able to generate new images based on different textual prompts (e.g., artist's name, artistic style, whether it depicts a seascape, landscape, or people)."
The additional information made a difference. When people knew how the AI system operated, they perceived the AI art images as less morally acceptable, especially when the creation of these images involved financial gain and artistic acclaim. But the aesthetic appeal of the images did not change, suggesting that learning how AI works made people reflect on ethics, not aesthetics.
Psychologists have found that people's judgments about what is good or valuable can change when they learn something has earned awards or praise from experts. The authority bias, for example, makes us more inclined to agree with people who seem to be in charge or in the know. In addition, cues such as success or prestige can lead people to see something as more morally good. In our second study, we told a group of participants that some of the AI art images had been exhibited, sold or praised. But we were surprised to find that sharing a work's success did not improve the moral acceptability of these images in the eyes of people who had learned about how these works are created.
-snip-
More at the link.
The third experiment had people who did not know how AI art was created make snap judgments about it, which confirmed that people don't have a negative reaction just to the label, to simply being told it's AI. The negative reaction to AI art is a result of knowing how it's created.
What was surprising to me was how little these people had to learn about how AI art is created to have a lower opinion of it. They didn't need to be told, for instance, that the images the AI was trained on were stolen, with the art owners' watermarks often regurgitated by the AI. They didn't need to be told so much AI art was being churned out that AI art copying famous artists often showed up high in search results for those artists' names.
They just understood there was something inherently wrong with people generating AI art from a machine trained on other people's work, real artists' work. That it lacked "moral acceptability" - and even being told that AI art had been exhibited, sold and praised didn't make it more morally acceptable.
Faux pas (16,317 posts)
at all.
highplainsdem (61,661 posts)

Tanuki (16,424 posts)
something so trivial and ephemeral is also a moral issue.
highplainsdem (61,661 posts)

PCB66 (112 posts)
However, I would love to see a national law requiring that all AI content be labeled as such.
highplainsdem (61,661 posts)
being labeled as AI, and the response was an overwhelming NO. A lot of people using AI want to pretend they're smarter or more talented than they are.
A number of AI companies will mark what's generated by the free tier of their products as having been produced by that AI tool, but typically paying customers are allowed to remove that label or watermark. So labeling the freebies as AI is both a way to turn the freebie into advertising for the AI company, and an inducement to get those free users to pay so they can try to hide the fact that they're using AI.
They haven't found any good way to label text, either. And people using AI for images but wanting to pretend they didn't will often crop out the watermark.