Delve did the security compliance on LiteLLM, an AI project hit by malware
From TechCrunch today:
https://techcrunch.com/2026/03/25/delve-did-the-security-compliance-on-litellm-an-ai-project-hit-by-malware/
LiteLLM gives developers easy access to hundreds of AI models and provides features like spend management. It's a breakout hit, downloaded as often as 3.4 million times per day, according to Snyk, one of the many security researchers monitoring the incident. The project had 40K stars on GitHub and thousands of forks (copies other developers used as a base for their own versions).
The malware was discovered, documented, and disclosed by research scientist Callum McMahon of FutureSearch, a company offering AI agents for web research. The malware slipped in through a dependency, meaning other open source software that LiteLLM relied upon. It then stole the log-in credentials of everything it touched. With those credentials, the malware gained access to more open source packages and accounts to harvest more credentials, and so on.
-snip-
Delve is the Y Combinator-backed, AI-powered compliance startup that's been accused of misleading its customers about their true compliance status by allegedly generating fake data and using auditors that rubber-stamp reports. Delve has denied these allegations.
There is one point of nuance here worth understanding. Such certifications are intended to show that a company has strong security policies in place to limit the possibility of incidents like this one. Certifications don't automatically prevent a company, like LiteLLM, from being hit by malware. While SOC 2 is supposed to cover policies surrounding software dependencies, malware can still slip in.
-snip-
Since LiteLLM is so popular, it's possible some DUers downloaded it.
Cybernews story on the malware infecting LiteLLM:
https://cybernews.com/security/critical-litellm-supply-chain-attack-sends-shockwaves/
Published: 25 March 2026
Ernestas Naprys
Senior Journalist
Developers are sounding the alarm bells. If you installed LiteLLM 1.82.7 or 1.82.8, immediately rotate everything: all secrets, including every environment variable, SSH key, cloud credential, and API key present on the system, security researchers warn. You might not even know that you use these packages: they often come as dependencies of major AI projects.
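Since the compromised versions often arrive as a transitive dependency, it's worth checking what's actually installed before assuming you're unaffected. Here is a minimal sketch of such a check in Python, assuming only the two version numbers named in the article (1.82.7 and 1.82.8); it is not an official detection tool, and the `is_compromised` helper is just an illustrative name:

```python
# Hedged sketch: check whether an installed copy of litellm is one of the
# compromised releases the article names (1.82.7 and 1.82.8). This is an
# illustration, not an official or exhaustive indicator-of-compromise list.
from importlib import metadata

COMPROMISED_VERSIONS = {"1.82.7", "1.82.8"}

def is_compromised(version: str) -> bool:
    """Return True if the given litellm version string is a known-bad release."""
    return version in COMPROMISED_VERSIONS

def check_installed() -> None:
    """Print whether the litellm installed in this environment is known-bad."""
    try:
        v = metadata.version("litellm")
    except metadata.PackageNotFoundError:
        print("litellm is not installed in this environment")
        return
    if is_compromised(v):
        print(f"WARNING: litellm {v} is a reported compromised release; "
              f"rotate all secrets, keys, and credentials on this machine")
    else:
        print(f"litellm {v} is not one of the reported compromised versions")

if __name__ == "__main__":
    check_installed()
```

Note that a clean result here only covers the environment you ran it in; the researchers' advice to rotate credentials still applies if either bad version ever ran on the machine.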
AI developers across the world report that their machines suddenly started behaving strangely.
-snip-
It's like a universal adapter allowing you to control LLMs, AI agents, and MCP tools from one place.
This means attackers obtained highly valuable API keys and credentials that could cause significant losses. Moreover, it opens the door to many other repositories that depend on LiteLLM, allowing attackers to snowball the attack even further.
-snip-
The TechCrunch story 3 days ago about Delve allegedly falsely telling customers they were compliant with privacy and security regulations:
https://techcrunch.com/2026/03/22/delve-accused-of-misleading-customers-with-fake-compliance/
Delve is a Y Combinator-backed startup that last year announced raising a $32 million Series A at a $300 million valuation. (The round was led by Insight Partners.) On Friday, the startup attempted to refute the accusations on its blog, calling the Substack post misleading and saying it contains a number of inaccurate claims.
-snip-
Their conclusion? That Delve achieves its claim of being the fastest platform by producing fake evidence, generating auditor conclusions on behalf of certification mills that rubber stamp reports, and skipping major framework requirements while telling clients they have achieved 100% compliance.
DeepDelver went into considerable detail about those claims, accusing the startup of providing customers with fabricated evidence of board meetings, tests, and processes that never happened, then forcing those customers to choose between adopting fake evidence or performing mostly manual work with little real automation or AI.
-snip-
erronis
(23,815 posts)
Trying to build up so quickly they totally avoid normal security protocols.
Unfortunately this one will hit a lot of open-source projects that have jumped on the stupid AI/LLM bandwagon.
highplainsdem
(61,950 posts)
It was such sloppy code that experts think it must have been vibe coding using AI.
erronis
(23,815 posts)
Makes me think of the very early days when Robert Morris's "worm" (1988) caused talk of viruses and other biological analogs that might infect our computers/systems/networks.
The unleashing of these evolving and dynamic agents against corporate and military systems is a natural evolution. And defense is always harder, especially when the aggressor can change modes so quickly.
Fun times!
Hugin
(37,838 posts)
My biggest problem these days is fat batteries.