Welcome to DU! The truly grassroots left-of-center political community where regular people, not algorithms, drive the discussions and set the standards.

highplainsdem

(61,950 posts)
Wed Mar 25, 2026, 09:12 PM

Delve did the security compliance on LiteLLM, an AI project hit by malware

From TechCrunch today:

https://techcrunch.com/2026/03/25/delve-did-the-security-compliance-on-litellm-an-ai-project-hit-by-malware/

-snip-

LiteLLM gives developers easy access to hundreds of AI models and provides features like spend management. It’s a breakout hit, downloaded as often as 3.4 million times per day, according to Snyk, one of the many security researchers monitoring the incident. The project had 40K stars on GitHub and thousands of forks (copies that developers use as a base for their own modified versions).

The malware was discovered, documented, and disclosed by research scientist Callum McMahon of FutureSearch, a company offering AI agents for web research. The malware slipped in through a “dependency,” meaning other open source software that LiteLLM relied upon. It then stole the log-in credentials of everything it touched. With those credentials, the malware gained access to more open source packages and accounts to harvest more credentials, and so on.

-snip-

Delve is the Y Combinator-backed, AI-powered compliance startup that has been accused of misleading its customers about their actual compliance status by allegedly generating fake data and using auditors that rubber-stamp reports. Delve has denied these allegations.

There is one point of nuance here worth understanding. Such certifications are intended to show that a company has strong security policies in place to limit the possibility of incidents like this one. Certifications don’t automatically prevent a company, like LiteLLM, from being hit by malware. While SOC 2 is supposed to cover policies surrounding software dependencies, malware can still slip in.

-snip-
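The "dependency" attack path the article describes is worth auditing in your own environment. As a minimal sketch (Python standard library only; the package names here are just examples), you can list what an installed distribution declares as its dependencies:

```python
from importlib.metadata import requires, PackageNotFoundError

def declared_deps(package: str) -> list[str]:
    """Return the dependency specifiers a locally installed package declares."""
    try:
        # requires() returns None when a package declares no dependencies
        return requires(package) or []
    except PackageNotFoundError:
        return []  # not installed in this environment

# Any installed distribution can be inspected this way, e.g.:
print(declared_deps("pip"))
print(declared_deps("no-such-package"))  # []
```

Transitive dependencies would need a recursive walk (or a tool like pipdeptree); the point is that every entry in that list is code running with your credentials.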



Since LiteLLM is so popular, it's possible some DUers downloaded it.
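If you're not sure which release you have, here's a minimal local check using only the Python standard library (the two affected version numbers come from the Cybernews report linked below; everything else is illustrative):

```python
from importlib.metadata import version, PackageNotFoundError

# Releases reported as compromised, per the Cybernews story.
AFFECTED = {"1.82.7", "1.82.8"}

def is_affected(installed: str) -> bool:
    """True if a version string matches a known-compromised release."""
    return installed in AFFECTED

def check_local(package: str = "litellm") -> bool:
    """Check whatever copy is installed in the current environment."""
    try:
        return is_affected(version(package))
    except PackageNotFoundError:
        return False  # not installed here

print(is_affected("1.82.7"))  # True
print(is_affected("1.82.6"))  # False
```

A clean result here only means this one environment is fine; the researchers' advice to rotate credentials still applies if an affected version was ever installed.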


Cybernews story on the malware infecting LiteLLM:

https://cybernews.com/security/critical-litellm-supply-chain-attack-sends-shockwaves/

Critical Python supply chain compromise: how library used by millions of AI developers got infected with malware
Published: 25 March 2026

Ernestas Naprys
Senior Journalist


Developers are sounding the alarm bells. If you installed LiteLLM 1.82.7 or 1.82.8, immediately rotate everything: all secrets and every environment variable, SSH key, cloud credential, and API key present on the system, security researchers warn. You might not even know that you use these packages – they often come as dependencies of major AI projects.

AI developers across the world report that their machines suddenly started behaving strangely.

-snip-

It’s like a universal adapter allowing you to control LLMs, AI agents, and MCP tools from one place.

This means attackers obtained highly valuable API keys and credentials that could cause significant losses. Moreover, it opens the door to many other repositories that depend on LiteLLM, allowing attackers to snowball the attack even further.

-snip-
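A standard defense against this kind of snowballing is installing only hash-pinned dependencies, so a tampered release fails verification instead of installing. A minimal sketch of pip's hash-checking mode (the version and hash below are placeholders, not a recommendation of a specific safe release):

```shell
# requirements.txt -- pin the exact release and its expected hash.
# Real hashes can be generated with pip-tools: pip-compile --generate-hashes
#
#   litellm==1.82.6 \
#       --hash=sha256:<expected-hash-of-the-known-good-release>
#
# pip's hash-checking mode then refuses any artifact that doesn't match:
pip install --require-hashes -r requirements.txt
```

This wouldn't have stopped the attack for anyone who pinned a compromised release, but it does stop a package from being silently swapped out underneath an existing pin.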



The TechCrunch story 3 days ago about Delve allegedly falsely telling customers they were compliant with privacy and security regulations:

https://techcrunch.com/2026/03/22/delve-accused-of-misleading-customers-with-fake-compliance/

An anonymous Substack post published this week accuses compliance startup Delve of “falsely” convincing “hundreds of customers they were compliant” with privacy and security regulations, potentially exposing those customers to “criminal liability under HIPAA and hefty fines under GDPR.”

Delve is a Y Combinator-backed startup that last year announced raising a $32 million Series A at a $300 million valuation. (The round was led by Insight Partners.) On Friday, the startup attempted to refute the accusations on its blog, calling the Substack post “misleading” and saying it “contains a number of inaccurate claims.”

-snip-

Their conclusion? That Delve “achieves its claim of being the fastest platform by producing fake evidence, generating auditor conclusions on behalf of certification mills that rubber stamp reports, and skipping major framework requirements while telling clients they have achieved 100% compliance.”

DeepDelver went into considerable detail about those claims, accusing the startup of providing customers with “fabricated evidence of board meetings, tests, and processes that never happened,” then forcing those customers to “choose between adopting fake evidence or performing mostly manual work with little real automation or AI.”

-snip-
4 replies

erronis

(23,815 posts)
1. Yup - this is a massive breach. And continuing to spread. These "AI" geniuses are really dolts.
Wed Mar 25, 2026, 09:23 PM

They're trying to build up so quickly that they totally avoid normal security protocols.

Unfortunately this one will hit a lot of open-source projects that have jumped on the stupid AI/LLM bandwagon.

highplainsdem

(61,950 posts)
2. Did you notice the paragraph in the first TechCrunch article about AI having been used to write the malware?
Wed Mar 25, 2026, 09:35 PM

It was such sloppy code that experts think it must have been vibe coding using AI.

erronis

(23,815 posts)
3. I didn't, but it's reported that the malware authors are using AI-generated scripts to rapidly probe targets.
Wed Mar 25, 2026, 09:45 PM

Makes me think of the very early days when Robert Morris's "worm" (1988) caused talk of viruses and other biological analogs that might infect our computers/systems/networks.

The unleashing of these evolving, dynamic agents against corporate and military systems is a natural evolution. And defense is always harder, especially when the aggressor can change modes so quickly.

Fun times!

Hugin

(37,838 posts)
4. I am so glad I moved on from any AI anything.
Thu Mar 26, 2026, 12:56 AM

My biggest problem these days is fat batteries.
