title: "Intel Brief: Mercor AI — LiteLLM Supply Chain Attack Breach"
date: 2026-04-05
slug: mercor-ai-breach-4tb-stolen-data
Intel Brief: Mercor AI — LiteLLM Supply Chain Attack Breach
Mercor, an AI recruitment startup that works as a contractor for OpenAI and Anthropic, confirmed a major data breach resulting from a supply chain attack on LiteLLM, an open-source tool that lets applications communicate with different AI models. Hackers linked to the TeamPCP group published two malicious versions of LiteLLM (1.82.7 and 1.82.8) on PyPI in late March 2026; the packages were available for approximately 40 minutes before removal. During this brief window, organizations running automated CI/CD pipelines unknowingly pulled the malicious code. The threat actors claim to have stolen 4TB of sensitive data from Mercor, including candidate profiles, internal system data, and potentially OpenAI/Anthropic contractor information. The attack affected thousands of organizations globally, as LiteLLM is present in approximately 36% of cloud environments and receives millions of downloads daily. The breach demonstrates the critical vulnerability of AI infrastructure to supply chain attacks targeting widely used open-source dependencies.
What Happened
Mercor confirmed a security breach resulting from a sophisticated supply chain attack on LiteLLM, an open-source software tool used across the AI industry. Threat actors compromised LiteLLM's PyPI package distribution and injected malicious code into two versions; thousands of organizations globally pulled the compromised packages during a roughly 40-minute availability window.
Confirmed Facts:
- Mercor is an AI recruitment startup
- Mercor serves as a contractor for OpenAI and Anthropic
- Breach linked to a LiteLLM supply chain attack
- LiteLLM is an open-source tool enabling AI model communication
- Incident occurred in late March 2026
- Malicious LiteLLM versions: 1.82.7 and 1.82.8
- Malicious packages available on PyPI for approximately 40 minutes
- Compromised packages later removed from distribution
- Attack linked to TeamPCP threat group
- TeamPCP used compromised maintainer credentials to publish malicious versions
- LiteLLM receives millions of downloads per day
- LiteLLM present in approximately 36% of cloud environments
- Thousands of organizations affected globally
- Mercor confirmed as one of affected organizations
- Threat actors claim theft of 4TB of data from Mercor
- Mercor initiated incident response and remediation
- Data includes sensitive candidate profiles and internal system data
Attack Timeline:
- Maintainer Credential Compromise (date not disclosed): TeamPCP obtained compromised credentials for a LiteLLM package maintainer account.
- Malicious Package Publication (late March 2026): Using the stolen credentials, TeamPCP published malicious versions 1.82.7 and 1.82.8 to PyPI.
- Automated Pull Window (approximately 40 minutes): Organizations running automated CI/CD pipelines unknowingly pulled the malicious code during the brief availability window.
- Malicious Package Removal: The malicious versions were removed from PyPI and legitimate versions restored.
- Post-Compromise Persistence: Attackers maintained access to victim systems and exfiltrated data following initial code execution.
- Data Exfiltration (late March 2026): Mercor and other organizations had data stolen; 4TB allegedly exfiltrated from Mercor.
- Incident Discovery & Response (late March 2026): Mercor discovered the compromise and initiated incident response procedures.
- Public Disclosure (April 3, 2026): The breach became public knowledge and threat actor claims surfaced.
What Was Taken
Reported Data Exposure (per Mercor statements and threat actor claims):
- Sensitive candidate profiles
- Internal system data
- Data from OpenAI and Anthropic contractor operations
- 4TB total data volume claimed by threat actors
Likely Data Types (inferred from Mercor's business function):
- Candidate personal information and resumes
- Employment application data
- Recruitment communications and assessments
- Internal AI evaluation data
- OpenAI/Anthropic contractor information
- Internal system configurations and API keys
- Development environment data
- System access credentials
Sensitivity Assessment: CRITICAL. AI recruitment contractor data includes:
- Complete candidate profiles and employment history
- Resume and work experience documentation
- Assessment scores and evaluation metrics
- Personal contact information and communications
- OpenAI and Anthropic contractor identities and project information
- Internal AI model evaluation data
- Development environment credentials and API keys
- System architecture and infrastructure documentation
Strategic Impact: The exposure of Mercor data enables:
- Access to OpenAI/Anthropic contractor networks and information
- Intelligence on AI recruitment and talent evaluation processes
- Insider information on AI development priorities
- Compromised credentials enabling further infiltration
- Targeting of recruited AI professionals
- Competitive intelligence on AI development trends
- Identification of key AI talent and contractors
Why It Matters
This attack represents a critical breach of AI industry supply chain infrastructure and demonstrates the massive risk from compromised open-source dependencies used across global cloud environments.
Strategic Significance:
- AI Infrastructure Supply Chain Compromise: LiteLLM is fundamental infrastructure used across the AI industry for model communication. Compromise of this dependency affected thousands of organizations and potentially millions of end users.
- Open-Source Maintainer Credential Risk: The attack demonstrates that compromised credentials for a widely used open-source project can push malicious code to thousands of organizations within minutes.
- Massive Blast Radius: LiteLLM's presence in roughly 36% of cloud environments, combined with millions of daily downloads, meant a single compromised package exposed a huge attack surface across global infrastructure.
- Automated CI/CD Vulnerability: Organizations using automated dependency updates (a common DevOps practice) pulled the malicious code automatically, without manual review, demonstrating the risk of fully automated deployment pipelines.
- OpenAI/Anthropic Contractor Exposure: The breach exposed information about OpenAI's and Anthropic's contractor networks, potentially revealing proprietary development partnerships.
- 4TB Data Exfiltration: The 4TB allegedly stolen from Mercor alone indicates that attackers had significant post-compromise time to exfiltrate large volumes of sensitive information.
- TeamPCP Attribution: The link to TeamPCP points to a sophisticated threat actor group with the capability to compromise maintainer credentials and deploy large-scale supply chain attacks.
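One concrete mitigation for the automated-pull risk described above is hash-pinned dependency installation, which pip supports natively. The sketch below is illustrative: the pinned version is the last pre-incident release named in this brief, and the sha256 digest is a placeholder, not a real LiteLLM artifact hash.

```shell
# requirements.txt pins an exact, previously verified release plus its digest
# (placeholder hash shown; generate real ones with a tool such as pip-compile):
#
#   litellm==1.82.6 \
#     --hash=sha256:0000000000000000000000000000000000000000000000000000000000000000

# With --require-hashes, pip refuses any artifact whose digest differs from
# the recorded one, so a briefly published malicious version cannot slip
# into an automated pipeline unnoticed:
pip install --require-hashes -r requirements.txt
```

The trade-off is operational: every dependency (including transitive ones) must be pinned and hashed for this mode to work, which is exactly why lockfile-generating tools are typically paired with it.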
The Attack Technique
Confirmed Attack Methods:
- Maintainer Credential Compromise: TeamPCP obtained compromised credentials for a LiteLLM package maintainer account (method not disclosed).
- Malicious Code Injection: Using the stolen credentials, attackers published two malicious versions of LiteLLM containing backdoor or data exfiltration code.
- PyPI Distribution Attack: The malicious packages were published to PyPI (Python Package Index), the official Python package repository trusted by millions of developers.
- Automated Deployment Exploitation: Organizations with automated CI/CD pipelines and automated dependency updates pulled the malicious code automatically, without manual review.
- Post-Compromise Persistence: The malicious code established persistence mechanisms enabling continued access after initial code execution.
- Data Exfiltration: Attackers used their persistent access to exfiltrate 4TB of data from Mercor systems.
Not Disclosed: The source material does not provide details on:
- Specific method used to compromise maintainer credentials (phishing, password reuse, credential stuffing, etc.)
- Specific malicious code injected into LiteLLM versions
- How long attackers maintained access post-compromise
- Specific credential or API key formats stolen
- Whether other organizations' data was exfiltrated at similar scale
- Exact persistence mechanism used by malicious code
- Whether backdoor access remains in victim systems
The methodology reflects a sophisticated supply chain approach: rather than intruding on each victim directly, the attackers compromised a single piece of fundamental infrastructure and let automated pipelines distribute the payload for them.
What Organizations Should Do
For Mercor & AI Companies:
- Immediate Incident Response & Forensic Investigation: Conduct complete forensic analysis of all systems exposed to the malicious LiteLLM code; determine the scope of attacker access post-compromise; identify all data exfiltrated; scan systems for persistence mechanisms left by attackers.
- Credential & API Key Rotation: Immediately rotate all API keys, credentials, and secrets that may have been exposed; scan for unauthorized use of rotated credentials; implement multi-factor authentication on all service accounts.
- Threat Actor Intelligence & Law Enforcement: Identify and secure all systems containing OpenAI/Anthropic contractor information; notify OpenAI and Anthropic of the security incident affecting their contractor networks; report to law enforcement and coordinate with FBI/CISA.
- Supply Chain Dependency Audit: Audit all open-source dependencies for vulnerabilities and compromises; implement software composition analysis (SCA) tools to detect vulnerable or malicious dependencies; establish a process for reviewing and validating open-source package updates.
- Development Pipeline Security: Implement code signing verification for all dependencies; require manual review of dependency updates before automated deployment; implement runtime behavior monitoring to detect malicious code execution; consider pinning dependencies to specific verified versions.
- Employee & Contractor Notification: Notify all employees and contractors whose data may have been exposed; provide a security briefing on potential social engineering targeting; monitor for targeted phishing and account takeover attempts.
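The credential-rotation step usually begins with finding where keys are exposed. Below is a minimal sketch of pattern-based secret scanning; the rule names and regexes are illustrative assumptions, and production scanners (e.g. trufflehog or gitleaks) ship far larger rule sets plus entropy checks.

```python
import re

# Illustrative patterns only; real secret scanners maintain hundreds of rules.
SECRET_PATTERNS = {
    "openai-style-key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
    "aws-access-key-id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def find_secrets(text: str) -> list[tuple[str, str]]:
    """Return (rule_name, matched_string) pairs for candidate secrets in text."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits
```

Run against config files, CI logs, and repository history; every hit is a candidate for rotation rather than proof of compromise.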
For Python/Open-Source Community:
- Implement stronger authentication and verification for package maintainer accounts
- Deploy hardware security keys for maintainer credential protection
- Implement code signing verification for all PyPI packages
- Require maintainer consent/notification for credential changes
- Implement automated malware scanning for published packages
For Organizations Using LiteLLM:
- Immediately audit systems for malicious LiteLLM versions (1.82.7, 1.82.8)
- Upgrade to patched version of LiteLLM
- Rotate all credentials and API keys that may have been exposed
- Scan systems for persistence mechanisms or backdoors
- Monitor network traffic for data exfiltration
- Implement software composition analysis tools
- Enable code signing verification for dependencies
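The first audit step above can be sketched as a small check against the two known-bad releases named in this brief. The helper names are illustrative, and a real audit should also cover lockfiles, container images, and build caches, not just the live environment.

```python
from importlib.metadata import PackageNotFoundError, version

# The two compromised LiteLLM releases identified in this incident.
MALICIOUS_LITELLM_VERSIONS = {"1.82.7", "1.82.8"}

def is_compromised(installed_version: str) -> bool:
    """True if the version string matches a known-bad LiteLLM release."""
    return installed_version.strip() in MALICIOUS_LITELLM_VERSIONS

def audit_environment(package: str = "litellm") -> str:
    """Classify this environment: 'compromised', 'clean', or 'absent'."""
    try:
        installed = version(package)
    except PackageNotFoundError:
        return "absent"
    return "compromised" if is_compromised(installed) else "clean"
```

A "clean" result here only means the bad versions are not currently installed; it says nothing about whether they ran earlier, so the persistence and traffic checks above still apply.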
For AI Industry & Development Teams:
- Implement supplier security assessments for all critical open-source dependencies
- Establish incident response procedures for supply chain attacks
- Monitor open-source repositories for suspicious maintainer activity
- Implement automated detection of dependency vulnerabilities
- Consider dependency isolation and sandboxing in CI/CD pipelines
- Establish information sharing for detected supply chain attacks
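For the automated vulnerability-detection point, one widely used open-source option in the Python ecosystem is pip-audit, which checks packages against published advisory databases. Note the limitation: advisories lag brand-new supply chain attacks by hours or days, so this complements pinning and manual review rather than replacing them.

```shell
# Install pip-audit and audit the current environment; it reports installed
# packages that have known published vulnerabilities.
pip install pip-audit
pip-audit

# Audit a requirements file instead of the live environment:
pip-audit -r requirements.txt
```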
Sources: AI Firm Mercor Confirms Breach as Hackers Claim 4TB of Stolen Data