General Discussion
How Silicon Valley built AI: Buying, scanning and discarding millions of books
https://www.washingtonpost.com/technology/2026/01/27/anthropic-ai-scan-destroy-books/

Within about a year, according to the filings, the company had spent tens of millions of dollars to acquire and slice the spines off millions of books, before scanning their pages to feed more knowledge into the AI models behind products such as its popular chatbot Claude.
-snip-
Books were viewed by the companies as a crucial prize, the court records show. In a January 2023 document, one Anthropic co-founder theorized that training AI models on books could teach them how to write well instead of mimicking low-quality internet speak. A 2024 email inside Meta described accessing a digital trove of books as essential to being competitive with its AI rivals.
-snip-
On several occasions, Meta employees raised concerns in internal messages that downloading a collection of millions of books without permission would violate copyright law. In December 2023, an internal email said the practice had been approved after escalation to MZ, an apparent reference to CEO Mark Zuckerberg, according to filings in a copyright lawsuit brought by book authors against the company. Meta declined to comment for this story.
-snip-
Much more at the link.
To the best of my knowledge, there is no such thing as an ethical, legally trained generative AI model.
No such thing as an ethical genAI company.
No such thing as an ethical genAI tech executive, company owner/investor or staffer, including scientists, who knew of the intellectual property theft and went along with it.
The training of all these AI models involved the greatest theft of intellectual property ever.
If you're aware of that theft, you should NOT be using genAI voluntarily, or promoting its use, including by circulating what's produced by genAI - whether it's text, images, video or music. Because if you do so, you're giving a thumbs-up to the theft, and to thieves who belong in prison.
I know some people are forced by their schools or jobs to use genAI. They should still point out that it's unethical, just as I hope they would if child labor or slavery were involved.
EDITING to link to two threads about the very appropriate reaction on Bluesky to a teachers' union head foolishly posting AI slop she thought was "fun" -
American Federation of Teachers president thought AI slop would be "fun" to share on Bluesky. Big mistake.
https://www.democraticunderground.com/100220895596
If you support unions (DUers should) but still think it's OK to post AI slop, see the hundreds of Bluesky replies
https://www.democraticunderground.com/100220895856
PatSeg (52,394 posts)

highplainsdem (60,566 posts)
the government pressuring everyone to use their unethical AND badly flawed - hallucinating - tools.
PatSeg (52,394 posts)
Most people won't even realize it until it is too late.
highplainsdem (60,566 posts)
all been very aware it's theft. They're just counting on having skillful but unethical lawyers, unethical governments, and a lazy and largely unethical public who won't know or won't care about the IP theft and other harms from the genAI industry, as long as they find genAI even slightly useful or entertaining. And that need to turn people into suckers - gullible users and fans of genAI - explains why companies losing money on genAI are still offering it for little or nothing. They're trying to create a situation where genAI companies are considered too important to regulate and too big to fail, so governments should subsidize them and bail them out.
SheltieLover (77,772 posts)

highplainsdem (60,566 posts)

SheltieLover (77,772 posts)
My anger is def not directed at you.

highplainsdem (60,566 posts)

dalton99a (92,594 posts)
(92,594 posts)EdmondDantes_
(1,457 posts)Or is the ability to intake everything still a scope problem? Plenty of developers for example put code up on GitHub or StackOverflow and shared knowledge within the community freely. But AI can ingest that far faster than I can. Or is it because it can synthesize and spread all of that data to a scope that wasn't possible before even with the Internet? Movie companies/record companies made arguments about VCRs and tape recorders, but the ability to distribute is so great. An AI can buy a single copy of say every computer science book and then output all that learning to anyone/everyone.