
title: "Intel Brief: Mercor AI — LiteLLM Supply Chain Attack Breach"
date: 2026-04-05
slug: mercor-ai-breach-4tb-stolen-data


Intel Brief: Mercor AI — LiteLLM Supply Chain Attack Breach

Mercor, an AI recruitment startup that works as a contractor for OpenAI and Anthropic, confirmed a major data breach resulting from a supply chain attack on LiteLLM, an open-source tool that lets applications talk to different AI models through a common interface. Hackers linked to the TeamPCP group published two malicious versions of LiteLLM (1.82.7 and 1.82.8) on PyPI in late March 2026; the packages were available for approximately 40 minutes before removal. During this brief window, organizations running automated CI/CD pipelines unknowingly pulled the malicious code. The threat actors claim to have stolen 4TB of sensitive data from Mercor, including candidate profiles, internal system data, and potentially OpenAI/Anthropic contractor information. The attack affected thousands of organizations globally: LiteLLM is present in approximately 36% of cloud environments and receives millions of downloads daily. The breach demonstrates how vulnerable AI infrastructure is to supply chain attacks on widely used open-source dependencies.

What Happened

Mercor confirmed a security breach resulting from a sophisticated supply chain attack on LiteLLM, an open-source software tool used across the AI industry. Threat actors compromised LiteLLM's PyPI package distribution, injected malicious code into two versions, and deployed the compromised packages to thousands of organizations globally in a 40-minute window.

Confirmed Facts:

  1. Malicious LiteLLM versions 1.82.7 and 1.82.8 were published to PyPI in late March 2026 using compromised maintainer credentials.

  2. The malicious packages were available for approximately 40 minutes before removal.

  3. The attack is attributed to hackers linked to the TeamPCP group.

  4. Threat actors claim to have exfiltrated 4TB of data from Mercor.

  5. LiteLLM is present in approximately 36% of cloud environments and receives millions of downloads daily.

Attack Timeline:

  1. Maintainer Credential Compromise (date not disclosed): TeamPCP obtained compromised credentials for LiteLLM package maintainer account.

  2. Malicious Package Publication (late March 2026): Using stolen credentials, TeamPCP published malicious versions 1.82.7 and 1.82.8 to PyPI.

  3. Automated Pull Window (40 minutes): Organizations running automated CI/CD pipelines unknowingly pulled malicious code during the brief availability window.

  4. Malicious Package Removal: Malicious versions were removed from PyPI; legitimate versions restored.

  5. Post-Compromise Persistence: Attackers maintained access to victim systems and exfiltrated data following initial code execution.

  6. Data Exfiltration (late March 2026): Mercor and other organizations had data stolen; 4TB allegedly exfiltrated from Mercor.

  7. Incident Discovery & Response (late March 2026): Mercor discovered the compromise and initiated incident response procedures.

  8. Public Disclosure (April 3, 2026): Breach became public knowledge; threat actor claims surfaced.

What Was Taken

Confirmed Data Exposure:

Data Types (based on Mercor business function):

  1. Candidate profiles and recruitment records.

  2. Internal system data.

  3. Potentially, information on OpenAI and Anthropic contractor relationships.

Sensitivity Assessment: CRITICAL. AI recruitment contractor data includes:

  1. Personal information on candidates and contractors.

  2. Details of contractor relationships with frontier AI labs (OpenAI, Anthropic).

  3. Internal system data that could support follow-on intrusions.

Strategic Impact: The exposure of Mercor data enables:

  1. Targeted phishing and social engineering against exposed candidates and contractors.

  2. Mapping of OpenAI and Anthropic contractor networks.

  3. Follow-on attacks leveraging exposed internal system data.

Why It Matters

This attack represents a critical breach of AI industry supply chain infrastructure and demonstrates the massive risk from compromised open-source dependencies used across global cloud environments.

Strategic Significance:

  1. AI Infrastructure Supply Chain Compromise: LiteLLM is fundamental infrastructure used across the AI industry for model communication. Compromise of this dependency affected thousands of organizations and potentially millions of end users.

  2. Open-Source Maintainer Credential Risk: The attack demonstrates that compromised credentials for widely-used open-source projects can enable injection of malicious code to thousands of organizations within minutes.

  3. Massive Blast Radius: LiteLLM's presence in 36% of cloud environments and its millions of daily downloads meant that a single compromised package exposed an enormous attack surface across global infrastructure.

  4. Automated CI/CD Vulnerability: Organizations using automated dependency updates (a common DevOps practice) pulled malicious code automatically, without manual review, demonstrating the risk of fully automated deployment pipelines.

  5. OpenAI/Anthropic Contractor Exposure: The breach exposed information about OpenAI and Anthropic's contractor networks, potentially revealing proprietary development partnerships.

  6. 4TB Data Exfiltration: The 4TB of stolen data from Mercor alone indicates that attackers had significant time post-compromise to exfiltrate large volumes of sensitive information.

  7. TeamPCP Attribution: The linking to TeamPCP indicates a sophisticated threat actor group with capability to compromise maintainer credentials and deploy large-scale supply chain attacks.
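The automated-update risk in point 4 above has a concrete mitigation: hash-pinned requirements. In pip's --require-hashes mode, any artifact whose digest differs from the one that was reviewed is rejected, so a freshly published malicious version cannot be pulled silently. A hypothetical fragment (the version number and digest below are placeholders, not real litellm values):

```text
# requirements.txt -- generated with `pip-compile --generate-hashes` (pip-tools)
litellm==1.82.6 \
    --hash=sha256:0000000000000000000000000000000000000000000000000000000000000000

# CI then installs with verification enforced:
#   pip install --require-hashes -r requirements.txt
```

With hashes required, even a hijacked maintainer account cannot push code into builds until someone regenerates and commits the lock file.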

The Attack Technique

Confirmed Attack Methods:

  1. Maintainer Credential Compromise: TeamPCP obtained compromised credentials for LiteLLM package maintainer account (method not disclosed).

  2. Malicious Code Injection: Using stolen credentials, attackers published two malicious versions of LiteLLM containing backdoor or data exfiltration code.

  3. PyPI Distribution Attack: Malicious packages were published to PyPI (Python Package Index), the official Python package repository trusted by millions of developers.

  4. Automated Deployment Exploitation: Organizations with automated CI/CD pipelines and automated dependency updates pulled malicious code automatically without manual review.

  5. Post-Compromise Persistence: Malicious code established persistence mechanisms enabling continued access after initial code execution.

  6. Data Exfiltration: Attackers used persistent access to exfiltrate 4TB of data from Mercor systems.

Not Disclosed: The source material does not provide details on:

  1. How the maintainer credentials were compromised.

  2. The exact behavior of the malicious code (backdoor vs. exfiltration payload).

  3. The specific persistence mechanisms deployed.

  4. The full list of affected organizations beyond Mercor.

The methodology demonstrates supply chain sophistication: the attackers targeted fundamental shared infrastructure rather than intruding into each victim directly.

What Organizations Should Do

For Mercor & AI Companies:

  1. Immediate Incident Response & Forensic Investigation — Conduct complete forensic analysis of all systems exposed to LiteLLM malicious code; determine scope of attacker access post-compromise; identify all data exfiltrated; scan systems for persistence mechanisms left by attackers.

  2. Credential & API Key Rotation — Immediately rotate all API keys, credentials, and secrets that may have been exposed; scan for unauthorized use of rotated credentials; implement multi-factor authentication on all service accounts.

  3. Partner Notification & Law Enforcement Coordination — Identify and secure all systems containing OpenAI/Anthropic contractor information; notify OpenAI and Anthropic of the security incident affecting their contractor networks; report to law enforcement and coordinate with FBI/CISA.

  4. Supply Chain Dependency Audit — Audit all open-source dependencies for vulnerabilities and compromises; implement software composition analysis (SCA) tools to detect vulnerable or malicious dependencies; establish process for reviewing and validating open-source package updates.

  5. Development Pipeline Security — Implement code signing verification for all dependencies; require manual review of dependency updates before automated deployment; implement runtime behavior monitoring to detect malicious code execution; consider dependency pinning to specific verified versions.

  6. Employee & Contractor Notification — Notify all employees and contractors whose data may have been exposed; provide security briefing on potential social engineering targeting; monitor for targeted phishing and account takeover attempts.
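Item 5's dependency verification can be reduced to a simple invariant: never install an artifact whose digest differs from the reviewed one. A minimal sketch of that check (function names are illustrative, not from any particular tool):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream a file from disk and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: str, expected_digest: str) -> None:
    """Raise if a downloaded package artifact does not match the reviewed digest."""
    actual = sha256_of(path)
    if actual != expected_digest:
        raise RuntimeError(f"hash mismatch for {path}: got {actual}")
```

pip's --require-hashes mode applies exactly this check to every requirement; the sketch shows what that guarantee consists of.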

For Python/Open-Source Community:

  1. Enforce phishing-resistant multi-factor authentication on maintainer accounts for widely used packages.

  2. Adopt PyPI Trusted Publishing and release attestations so packages can only be published from verified build pipelines.

  3. Improve detection and takedown speed for malicious package versions.

For Organizations Using LiteLLM:

  1. Determine whether versions 1.82.7 or 1.82.8 were ever installed in any environment, including CI/CD runners and container images.

  2. If so, treat those environments as compromised: rotate all API keys and secrets, and hunt for persistence mechanisms.

  3. Pin LiteLLM to a verified version and review future updates before deployment.
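As a first triage step, teams can check whether the currently installed litellm is one of the two malicious releases; a minimal sketch (the bad-version list comes from this brief, and a clean result does not rule out earlier exposure in CI images or past builds):

```python
from importlib import metadata

# Releases identified in this brief as malicious on PyPI.
COMPROMISED_VERSIONS = {"1.82.7", "1.82.8"}

def litellm_status() -> str:
    """Classify the installed litellm, if any, against the known-bad list."""
    try:
        installed = metadata.version("litellm")
    except metadata.PackageNotFoundError:
        return "not installed"
    return "COMPROMISED" if installed in COMPROMISED_VERSIONS else f"ok ({installed})"

print(litellm_status())
```

This only inspects the current environment; build logs and image layers from late March 2026 need to be checked separately.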

For AI Industry & Development Teams:

  1. Treat open-source AI tooling as critical attack surface and inventory it accordingly.

  2. Pin and hash-verify dependencies; stage updates through review environments before production.

  3. Monitor runtime behavior of dependencies to detect unexpected network or filesystem activity.

Source: AI Firm Mercor Confirms Breach as Hackers Claim 4TB of Stolen Data