General Discussion
When AI Use Makes You Uncool (The Chronicle of Higher Education, 4/6)
I posted a thread Sunday
https://www.democraticunderground.com/100221151067
about some college students becoming so dependent on AI that they have to ask AI for help just to talk in a classroom. That CNN article was published April 4.
This article in The Chronicle of Higher Education came out two days later with a different focus: how use of AI is affecting students' opinions of, and trust in, other students.
-snip-
Madelyn Rowley has a hard time viewing classmates who use generative AI to complete their work as her intellectual and academic peers.
-snip-
Students tell Watkins, assistant director of academic innovation, that it's hard for them to ignore classmates flitting between ChatGPT windows and programs that make that text sound more human before submitting assignments, he said. They also lament that they are paying for a human being to work with them, not for an AI chatbot that they can use for free.
-snip-
"I'm glad that I wasn't using it in my freshman year because building that foundational knowledge is important without the use of AI," said Kamya Raman, who's studying applied math and computer science at Brown. "That's something that you can't really replace. It would definitely affect the foundations of your learning if you were to shortcut that initial learning curve."
-snip-
This Chronicle article referred to another news story, from The Conversation last July:
https://theconversation.com/university-students-feel-anxious-confused-and-distrustful-about-ai-in-the-classroom-and-among-their-peers-258665
These experiences feel unfair and uncomfortable for students. They can report their classmates for academic integrity violations and enter yet another zone in which distrust mounts, or they can try to work with them, sometimes with resentment. "It ends up being more work for me," a political science major said, "because it's not only me doing my work by myself, it's me double-checking yours."
EdmondDantes_
(1,836 posts)
highplainsdem
(62,342 posts)
to pretend they did something.
I'm glad there are students who realize, despite all the hype and pressure from the AI industry, its paid shills, and anyone else they've been able to delude or intimidate, that AI use is never necessary, let alone worth all the harm AI does.
msongs
(73,806 posts)
highplainsdem
(62,342 posts)
Ilikepurple
(709 posts)
I think most collaborative projects, both academic and professional, are behind me, but it was one thing to have to trust the integrity of your colleagues; now you also have to trust the veracity of their tools and the work that results. The techbro theory is that it's better to test technologies in the real world early rather than sit on them while gauging the limits of their function, and that theory is now being applied to AI. AI is probably going to go through some painful, perhaps catastrophic, growing pains, and its ultimate use may prove far more limited and less practical once it's applied over time and industry-wide. The pitfalls of cutting corners early are sometimes too deep to fix as you go. I'm not just afraid for academics and the lack of development of critical thinking, but also of both the localized and global effects of a growing tolerance for avoidable mistakes across the various professional disciplines.
highplainsdem
(62,342 posts)the competition than testing their AI in the real world. And they found a perfect early market with students who wouldn't be likely to notice its mistakes.
Now the AI industry has spent hundreds of billions of dollars on a flawed type of AI that will probably always hallucinate. And they hope to get everyone to shift the blame for their AI's failures to humans prompting it wrong or not catching its mistakes.
Ilikepurple
(709 posts)
Companies generally do this to cut corners, to gain a competitive advantage, or to attract capital investment. I think all three are at play in the AI world. In my opinion, our legislators and judges have proven to be behind the advance of novel technological industries, and even of novel product distribution methods. They often rely on the industries to set their own standards, something I don't agree with, but I'm a pro-regulation sort of person. Not only is the AI product itself untested; so are the systems designed to cover its shortcomings. It's amazing that the whole world jumped in head first, mainly out of fear of being left behind. I've heard some individual success stories in utilizing AI, and I'm sure it will be helpful, ethics aside, in some applications, but I'm afraid it'll do a lot of damage before its effects on both work product and worker competency are properly considered.
Bettie
(19,745 posts)
they are all very much "I can do it myself or learn to, don't need AI".
highplainsdem
(62,342 posts)
Bettie
(19,745 posts)
but he's also a cranky old guy who thinks that people should think on their own, and he's especially annoyed by "AI Slop" videos.
I just find it....useless. I tried to use one to make a D&D character image for a game (I like to have a basic image of a character) and it came up with things that were so wrong it was both funny and sad. I went back to Pinterest to find an inspiration image, which works just fine, because we all usually have pictures for the first session to get an idea of the characters....boy, I just read that and it makes me sound like the biggest dork in the universe.
Yeah, that's probably true.
Oh, my kids also never really used social media except for Discord....and they are 25, 24, and 17.
highplainsdem
(62,342 posts)threats) and AI slop is incredibly annoying.
And you're not a dork for liking D&D. I never got into it myself, but a number of my friends did. One, unfortunately, to the point where it hurt his grades for a while, but he got his addiction under control.
Your kids sound very smart about social media as well as AI.
WhiskeyGrinder
(26,981 posts)
in conversation.