
highplainsdem

(61,661 posts)
Fri Mar 13, 2026, 10:34 PM 20 hrs ago

People who know more about AI art find it less ethical (Scientific American, March 6, 2026)

https://www.scientificamerican.com/article/people-who-know-more-about-ai-art-find-it-less-ethical/

People who know more about AI art find it less ethical

When people understand the system and process behind AI art, its moral implications become harder to accept

By Ionela Bara

-snip-

A viewer’s comfort with AI art, however, may depend on how much they know about how it’s made. I study neuroaesthetics, a field that combines neuroscience, psychology and our perception of beauty and art. My colleagues and I have found that the more people learn about how AI’s back end works—the datasets, training process, prompting—the less comfortable they are with the moral considerations surrounding these creations and the value of AI-generated pieces.

-snip-

In the first experiment, we showed our participants 20 landscapes and 20 portraits that were generated using DALL-E 3 with prompts based on the Impressionist art of the Spanish painter Joaquín Sorolla. Half of the participants viewed this AI art with no added context. The other half received a short text that gave them more information. It read: “This image was generated by an AI algorithm that produces images from textual descriptors. To accomplish that, several steps are required. First, the AI algorithm is trained by learning a large dataset of art images and their corresponding text descriptors, such as the artist’s name. Then, the AI algorithm is able to generate new images based on different textual prompts (e.g., artist's name, artistic style, whether it depicts a seascape, landscape, or people).”

The additional information made a difference. When people knew how the AI system operated, they perceived the AI art images as less morally acceptable, especially when the creation of these images involved financial gain and artistic acclaim. But the aesthetic appeal of the images did not change, suggesting that learning how AI works made people reflect on ethics, not aesthetics.

Psychologists have found that people’s judgments about what is good or valuable can change when they learn something has earned awards or praise from experts. The authority bias, for example, makes us more inclined to agree with people who seem to be in charge or in the know. In addition, cues such as success or prestige can lead people to see something as more morally good. In our second study, we told a group of participants that some of the AI art images had been exhibited, sold or praised. But we were surprised to find that sharing a work’s success did not improve the moral acceptability of these images in the eyes of people who had learned about how these works are created.

-snip-


More at the link.

The third experiment had people who did not know how AI art was created make snap judgments about it, which confirmed that people don't have a negative reaction to the label alone, to simply being told something is AI. The negative reaction to AI art comes from knowing how it's created.

What was surprising to me was how little these people had to learn about how AI art is created to have a lower opinion of it. They didn't need to be told, for instance, that the images the AI was trained on were stolen, with the art owners' watermarks often regurgitated by the AI. They didn't need to be told that so much AI art was being churned out that imitations of famous artists often showed up high in search results for those artists' names.

They just understood there was something inherently wrong with people generating AI art from a machine trained on other people's work, real artists' work. That it lacked "moral acceptability," and even being told that AI art had been exhibited, sold and praised didn't make it more morally acceptable.
People who know more about AI art find it less ethical (Scientific American, March 6, 2026) (Original Post) — highplainsdem, 20 hrs ago, OP
1. I don't trust ai — Faux pas, 20 hrs ago
2. As far as I'm concerned, the environmental cost of producing — Tanuki, 20 hrs ago
3. There are lots of reasons not to. — highplainsdem, 19 hrs ago
4. I agree. — highplainsdem, 18 hrs ago
5. I am normally not in favor of more government regulations restricting freedom — PCB66, 8 hrs ago
6. OpenAI actually polled users a couple of years ago to ask if they would be OK with their AI "creations" — highplainsdem, 8 hrs ago

Tanuki

(16,424 posts)
2. As far as I'm concerned, the environmental cost of producing
Fri Mar 13, 2026, 10:47 PM
20 hrs ago

something so trivial and ephemeral is also a moral issue.

PCB66

(112 posts)
5. I am normally not in favor of more government regulations restricting freedom
Sat Mar 14, 2026, 10:08 AM
8 hrs ago

However, I would love to see a national law requiring that all AI content be labeled as such.

highplainsdem

(61,661 posts)
6. OpenAI actually polled users a couple of years ago to ask if they would be OK with their AI "creations"
Sat Mar 14, 2026, 10:41 AM
8 hrs ago

being labeled as AI, and the response was an overwhelming NO. A lot of people using AI want to pretend they're smarter or more talented than they are.

A number of AI companies will mark what's generated by the free tier of their products as having been produced by that AI tool, but typically paying customers are allowed to remove that label or watermark. So labeling the freebies as AI is both a way to turn the freebie into advertising for the AI company, and an inducement to get those free users to pay so they can try to hide the fact that they're using AI.

They haven't found any good way to label text, either. And people using AI for images but wanting to pretend they didn't will often crop out the watermark.
