Welcome to DU!
The truly grassroots left-of-center political community where regular people, not algorithms, drive the discussions and set the standards.
Join the community:
Create a free account
Support DU (and get rid of ads!):
Become a Star Member
Latest Breaking News
Editorials & Other Articles
General Discussion
The DU Lounge
All Forums
Issue Forums
Culture Forums
Alliance Forums
Region Forums
Support Forums
Help & Search
General Discussion
Related: Editorials & Other Articles, Issue Forums, Alliance Forums, Region ForumsAI agents abound, unbound by rules or safety disclosures -The Register
AI agents are becoming more common and more capable, without consensus or standards on how they should behave, say academic researchers.
So says MITs Computer Science & Artificial Intelligence Laboratory (CSAIL), which analyzed 30 AI agents for its 2025 AI Agent Index, which assesses machine learning models that can take action online through their access to software services.
AI agents may take the form of chat applications with tools (Manus AI, ChatGPT Agent, Claude Code), browser-based agents (Perplexity Comet, ChatGPT Atlas, ByteDance Agent TARS), or enterprise workflow agents (Microsoft Copilot Studio, ServiceNow Agent).
The paper accompanying the AI Agent Index observes that despite growing interest and investment in AI agents, "key aspects of their real-world development and deployment remain opaque, with little information made publicly available to researchers or policymakers."
The AI community frenzy around open source agent platform OpenClaw, and its accompanying agent interaction network Moltbook plus ongoing frustration with AI-generated code submissions to open source projects underscores the consequences of letting agents loose without behavioral rules.
So says MITs Computer Science & Artificial Intelligence Laboratory (CSAIL), which analyzed 30 AI agents for its 2025 AI Agent Index, which assesses machine learning models that can take action online through their access to software services.
AI agents may take the form of chat applications with tools (Manus AI, ChatGPT Agent, Claude Code), browser-based agents (Perplexity Comet, ChatGPT Atlas, ByteDance Agent TARS), or enterprise workflow agents (Microsoft Copilot Studio, ServiceNow Agent).
The paper accompanying the AI Agent Index observes that despite growing interest and investment in AI agents, "key aspects of their real-world development and deployment remain opaque, with little information made publicly available to researchers or policymakers."
The AI community frenzy around open source agent platform OpenClaw, and its accompanying agent interaction network Moltbook plus ongoing frustration with AI-generated code submissions to open source projects underscores the consequences of letting agents loose without behavioral rules.
https://www.theregister.com/2026/02/20/ai_agents_abound_unbound_by/]
2 replies
= new reply since forum marked as read
Highlight:
NoneDon't highlight anything
5 newestHighlight 5 most recent replies
AI agents abound, unbound by rules or safety disclosures -The Register (Original Post)
justaprogressive
Saturday
OP
highplainsdem
(61,175 posts)1. From The Verge: The AI security nightmare is here
https://www.theverge.com/ai-artificial-intelligence/881574/cline-openclaw-prompt-injection-hack
Lockdown Mode isn't available to individual consumers, though:
https://openai.com/index/introducing-lockdown-mode-and-elevated-risk-labels-in-chatgpt/
Btw, the hacker had warned Cline about the security risk earlier, but they didn't fix it till he called them out in public.
A hacker tricked a popular AI coding tool into installing OpenClaw the viral, open-source AI agent OpenClaw that actually does things absolutely everywhere. Funny as a stunt, but a sign of what to come as more and more people let autonomous software use their computers on their behalf.
The hacker took advantage of a vulnerability in Cline, an open-source AI coding agent popular among developers, that security researcher Adnan Khan had surfaced just days earlier as a proof of concept. Simply put, Clines workflow used Anthropics Claude, which could be fed sneaky instructions and made to do things that it shouldnt, a technique known as a prompt injection.
The hacker used their access to slip through instructions to automatically install software on users computers. They could have installed anything, but they opted for OpenClaw. Fortunately, the agents were not activated upon installation, or this would have been a very different story.
Its a sign of how quickly things can unravel when AI agents are given control over our computers. They may look like clever wordplay one group wooed chatbots into committing crimes with poetry but in a world of increasingly autonomous software, prompt injections are massive security risks that are very difficult to defend against. Acknowledging this, some companies instead lock down what AI tools can do if theyre hijacked. OpenAI, for example, recently introduced a new Lockdown Mode for ChatGPT preventing it from giving your data away.
-snip-
The hacker took advantage of a vulnerability in Cline, an open-source AI coding agent popular among developers, that security researcher Adnan Khan had surfaced just days earlier as a proof of concept. Simply put, Clines workflow used Anthropics Claude, which could be fed sneaky instructions and made to do things that it shouldnt, a technique known as a prompt injection.
The hacker used their access to slip through instructions to automatically install software on users computers. They could have installed anything, but they opted for OpenClaw. Fortunately, the agents were not activated upon installation, or this would have been a very different story.
Its a sign of how quickly things can unravel when AI agents are given control over our computers. They may look like clever wordplay one group wooed chatbots into committing crimes with poetry but in a world of increasingly autonomous software, prompt injections are massive security risks that are very difficult to defend against. Acknowledging this, some companies instead lock down what AI tools can do if theyre hijacked. OpenAI, for example, recently introduced a new Lockdown Mode for ChatGPT preventing it from giving your data away.
-snip-
Lockdown Mode isn't available to individual consumers, though:
https://openai.com/index/introducing-lockdown-mode-and-elevated-risk-labels-in-chatgpt/
Btw, the hacker had warned Cline about the security risk earlier, but they didn't fix it till he called them out in public.
SheltieLover
(79,014 posts)2. Ty for sharing!