Cyber Security News
Vercel Breached Through an AI Productivity Tool
Vercel confirmed on April 19 that attackers accessed internal systems, source code, NPM tokens, GitHub tokens, and employee data after compromising an employee's Google Workspace account. The entry point wasn't a phishing email or a credential dump: it was Context.ai, a third-party AI tool the employee had authorized via OAuth. The tool's inherited permissions gave attackers the keys, and ShinyHunters is now demanding $2 million for the data.
The downstream risk is significant. Vercel hosts frontends for a large slice of the web, including crypto and Web3 projects where leaked NPM and GitHub tokens translate directly into supply chain compromise. The company has urged customers to rotate environment variables and secrets regardless of official notification, which is itself a tell about how much blast radius remains unquantified.
Why it matters: OAuth consent to AI tools is now a first-class attack surface, and most organizations have zero inventory of which agents hold which scopes inside their Workspace tenants.
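Building that inventory is mostly a filtering problem once you can export grant data. A minimal sketch, with illustrative data: the record shape (`displayText`, `scopes`) loosely mirrors what the Google Admin SDK Directory API returns from `tokens().list()`, but the risky-scope list and sample grants here are assumptions; feed it your own tenant export.

```python
# Sketch: triage third-party OAuth grants exported from a Workspace tenant.
# Scope URIs below are real Google scopes; the grant records are illustrative.

RISKY_SCOPES = {
    "https://mail.google.com/",                              # full mailbox
    "https://www.googleapis.com/auth/drive",                 # full Drive
    "https://www.googleapis.com/auth/admin.directory.user",  # user admin
}

def flag_risky_grants(grants):
    """Return apps whose granted scopes intersect the risky set."""
    flagged = []
    for g in grants:
        hits = sorted(set(g.get("scopes", [])) & RISKY_SCOPES)
        if hits:
            flagged.append({"app": g["displayText"], "risky_scopes": hits})
    return flagged

grants = [
    {"displayText": "Context.ai",
     "scopes": ["https://mail.google.com/",
                "https://www.googleapis.com/auth/drive"]},
    {"displayText": "Calendar Sync",
     "scopes": ["https://www.googleapis.com/auth/calendar.readonly"]},
]

print(flag_risky_grants(grants))  # only Context.ai is flagged
```

The useful output is not the flag itself but the delta: run it weekly and diff, so a newly consented AI tool holding mailbox or Drive scope surfaces before it becomes an incident.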
Sources: CyberInsider | InfoWorld | TechRadar
ShinyHunters Runs a Coordinated "Pay or Leak" Week
On April 18, ShinyHunters posted Carnival Corporation (8.7M records), Zara, and 7-Eleven to its extortion portal with an April 21 deadline, on top of a confirmed 13.5M-record McGraw Hill dump traced to a Salesforce misconfiguration and the live Vercel extortion. The common thread across targets is SaaS: Salesforce instances, Snowflake warehouses, and OAuth-connected tooling, not hardened primary infrastructure.
This is the same playbook that hit the LAUSD/Edgenuity student database via Snowflake earlier in the month. Attackers are systematically working the seams where enterprise data sprawls across third-party platforms with weaker governance than the mothership.
Why it matters: Your data exposure is now the weakest SaaS integration in your vendor tree, and most organizations do not audit Salesforce/Snowflake tenant configurations with the same rigor as on-prem.
Sources: Cybernews | BleepingComputer | The Register
Payouts King Hides Ransomware Inside QEMU Virtual Machines
A new ransomware operation, Payouts King, believed to be staffed by former BlackBasta affiliates, is running its entire operational toolchain inside hidden QEMU virtual machines on compromised hosts. The VMs establish reverse SSH backdoors and execute encryption routines in an environment EDR and AV cannot see into. Initial access is typically via exposed SonicWall/Cisco SSL VPNs or unpatched SolarWinds Web Help Desk instances.
The technique isn't wholly new (Sophos documented QEMU abuse last year), but Payouts King has operationalized it at scale, pairing it with hardened obfuscation, strong cryptography, and selective encryption. The social engineering playbook (helpdesk vishing, Teams impersonation targeting executives) comes straight from the BlackBasta lineage.
Why it matters: EDR built on host-process visibility is blind to a guest OS running inside an emulator, and defenders need hypervisor-aware telemetry or outbound network controls that detect anomalous nested virtualization.
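Short of full hypervisor-aware telemetry, even a crude sweep for headless emulators on hosts that have no business running them raises the cost of this trick. A sketch under assumptions: the suspicious-flag list and the plain command-line strings stand in for whatever your EDR or `ps` export provides.

```python
# Sketch: flag headless QEMU guests in a process snapshot. A VM with no
# display attached on a file server or workstation deserves a look.

SUSPICIOUS_FLAGS = ("-display none", "-nographic", "-snapshot")

def flag_hidden_vms(process_cmdlines):
    """Return command lines that look like headless QEMU guests."""
    return [cmd for cmd in process_cmdlines
            if "qemu-system" in cmd
            and any(flag in cmd for flag in SUSPICIOUS_FLAGS)]

procs = [
    "qemu-system-x86_64 -m 1024 -display none -snapshot -drive file=w.qcow2",
    "/usr/sbin/sshd -D",
]
print(flag_hidden_vms(procs))  # flags only the headless QEMU guest
```

Pair it with an allowlist of hosts where virtualization is expected, and with outbound controls for the reverse-SSH leg the article describes.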
Sources: BleepingComputer | Security Affairs | GBHackers
Three Microsoft Defender Zero-Days, Two Still Unpatched
Attackers are actively exploiting three Defender zero-days, codenamed BlueHammer, RedSun, and UnDefend, to elevate privileges and disable detection on fully patched Windows hosts. As of April 17, two of the three remain without vendor patches. A public PoC demonstrates Defender being tricked into rewriting malicious files into protected locations, yielding SYSTEM-level execution through the very tool meant to stop it.
This lands against the backdrop of Microsoft's April Patch Tuesday, which fixed 167 vulnerabilities including CVE-2026-33824 (CVSS 9.8 unauthenticated RCE in Windows IKE) and an actively exploited SharePoint zero-day. The Defender chain, though, is the standout: when the endpoint detection product itself becomes the privilege escalation primitive, alert suppression is built in.
Why it matters: "Defender is enabled" is no longer a compensating control, and organizations relying on it as a single detection layer need secondary behavioral telemetry immediately.
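One cheap secondary layer is to watch the Defender operational log for tamper-shaped events instead of trusting a green dashboard. Event IDs 5001/5010/5012 (real-time protection and scanning features disabled) are documented Defender Antivirus events; the record shape below is an assumption standing in for your log pipeline.

```python
# Sketch: filter forwarded Defender operational-log records for events that
# indicate a protection feature was switched off. An attacker who can disable
# Defender usually cannot also scrub the forwarded copy of the log.

TAMPER_EVENT_IDS = {5001, 5010, 5012}  # protection/scanning disabled

def defender_tamper_events(events):
    """Return events whose ID indicates protection was turned off."""
    return [e for e in events if e.get("event_id") in TAMPER_EVENT_IDS]

log = [
    {"event_id": 1116, "host": "ws-14"},  # malware detected: not tamper
    {"event_id": 5001, "host": "ws-14"},  # real-time protection disabled
]
print(defender_tamper_events(log))
```

The point is architectural: the signal that Defender died must travel through a channel Defender does not control.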
Sources: Security Affairs | CSO Online | BleepingComputer
APT28 Compromises 170+ Ukrainian Anti-Corruption Prosecutor Accounts
Ukraine's CERT confirmed that Russian military intelligence (APT28/Fancy Bear) has compromised over 170 email accounts at the Specialized Anti-Corruption Prosecutor's Office and the Asset Recovery and Management Agency. Researchers separately identified "SlimAgent," a 64-bit DLL deployed post-exploitation for screen capture, clipboard, and keystroke logging (MD5: 889B83D375A0FB00670AF5276816080E). Parallel reporting has APT28 running a DNS-hijacking router botnet campaign flagged by NCSC on April 15.
The target selection is the story. APT28 isn't chasing classified military traffic here; it's inside the legal machinery Ukraine uses to trace and seize Russia-linked assets. Strategic intelligence against sanctions enforcement is now a cyber priority on par with battlefield systems.
Why it matters: Espionage against legal/financial oversight bodies is a preview of where adversary targeting goes when kinetic options plateau; defenders in legal, compliance, and regulator-adjacent orgs should expect more of this.
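The published SlimAgent hash is directly actionable. A minimal sketch of IOC matching against that MD5: the helper takes raw bytes so it stays self-contained; in practice you would stream files from disk in chunks before hashing.

```python
import hashlib

# Sketch: match samples against the SlimAgent MD5 published above. MD5 here
# is an IOC-matching convention from the report, not an integrity guarantee.

IOC_MD5 = {"889b83d375a0fb00670af5276816080e"}  # SlimAgent DLL

def md5_hex(data: bytes) -> str:
    return hashlib.md5(data).hexdigest()

def matches_ioc(data: bytes) -> bool:
    """True if the sample's MD5 digest is in the IOC set."""
    return md5_hex(data) in IOC_MD5

print(matches_ioc(b"benign sample"))  # False for anything but SlimAgent
```

Hash sweeps only catch this exact build; treat a hit as confirmation, not the absence of hits as clearance.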
Sources: The Straits Times | GBlock | Prevenity
WordPress Plugin Supply Chain: 31 Plugins Backdoored Via Acquisition
Thirty-one WordPress plugins were pulled in April after a buyer on the Flippa marketplace planted backdoors in the first SVN commit following ownership transfer. The attacker waited approximately eight months between planting and activation, a deliberate dormancy window designed to outlast acquisition-related security review.
This is supply chain attack via M&A. The open-source plugin economy has no meaningful escrow on ownership transfers, no mandatory review of the first post-transfer commits, and no standard disclosure when a maintainer handle changes hands. Eight-month dwell time means most WP install bases saw multiple "routine" updates before the trap triggered.
Why it matters: Ownership changes in dependency maintainers deserve the same scrutiny as typosquatting, and "pinned to the previous version before ownership changed" needs to become a standard policy for plugin and package management.
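Operationally, that policy is a metadata diff before every update. A sketch with assumed field names (`owner`, `author`, `repo`); map them onto whatever your plugin or package manifest actually exposes.

```python
# Sketch: diff successive release metadata and treat any change in ownership
# fields as an update blocker pending manual review.

def ownership_changed(prev_meta, new_meta, fields=("owner", "author", "repo")):
    """Return the fields whose values differ between two releases."""
    return [f for f in fields if prev_meta.get(f) != new_meta.get(f)]

prev = {"owner": "original-dev", "author": "original-dev",
        "repo": "svn://plugins/foo"}
new  = {"owner": "flippa-buyer-412", "author": "original-dev",
        "repo": "svn://plugins/foo"}

changed = ownership_changed(prev, new)
if changed:
    print(f"HOLD update: ownership fields changed: {changed}")
```

An eight-month dormancy window defeats point-in-time review, but it cannot defeat a pin that only releases after a human has looked at the new owner.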
Sources: GBHackers | BleepingComputer | The Register
AI News
Claude Opus 4.7 Ships with Breaking API Changes
Anthropic released Claude Opus 4.7 on April 16, posting 87.6% on SWE-bench Verified and 64.3% on SWE-bench Pro, with a new "xhigh" effort level and upgraded visual acuity (up to 2,576px). The release is explicitly tuned for long-horizon agentic work: the model follows instructions literally rather than loosely, which forced enterprise integrators to retune prompts and update SDK integrations rather than hot-swap model IDs.
The breaking change matters more than the benchmarks. Anthropic is treating Opus as serious infrastructure now, not a drop-in replacement, and the release coincided with a dense week of agentic framework updates (CrewAI 1.14.2, LangGraph 1.1.8, Google ADK 1.31.0). Anthropic also announced an 800-person London office, quadrupling its UK footprint days after OpenAI's own UK expansion.
Why it matters: "Literal instruction following" and breaking API changes signal a model class where prompts are becoming programs, and prompt debt is the new tech debt.
Sources: Knowledge Hub Media | CNBC | openclawd.in
Anthropic Launches Claude Managed Agents
Anthropic opened a public beta for Claude Managed Agents, a hosted agent harness (/v1/agents, /v1/environments, /v1/sessions) that bundles the agent loop, sandboxed tool execution, and state persistence into the provider stack. Developers no longer need to build their own runtime, orchestration layer, or sandbox infrastructure to deploy autonomous agents.
This is the infrastructure shift underneath the capability story. By absorbing execution and sandboxing into the provider, Anthropic is competing directly with middleware layers like Zapier and n8n, while Amazon's AgentCore, Databricks' Agent Bricks, and Salesforce's Agent Fabric are making similar bets. The model is no longer the product; the governed runtime is.
Why it matters: The agent-platform battle has shifted from "whose model is smartest" to "whose runtime will enterprises trust with production access to their data and tools."
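To make the shape of the offering concrete, here is a hedged sketch of a thin client for the hosted-agent surface. Only the endpoint paths (`/v1/agents`, `/v1/sessions`) come from the announcement; the payload fields, auth header, and base URL are assumptions, and nothing is actually sent on the wire.

```python
import json

# Sketch: build requests for a hosted-agent API. Request shapes are
# hypothetical; pair the returned tuples with any HTTP library to send.

class ManagedAgentClient:
    def __init__(self, api_key, base_url="https://api.anthropic.com"):
        self.base_url = base_url
        self.headers = {"x-api-key": api_key,
                        "content-type": "application/json"}

    def create_agent_request(self, name, tools):
        """(method, url, body) for registering an agent with a tool grant."""
        body = json.dumps({"name": name, "tools": tools})
        return ("POST", f"{self.base_url}/v1/agents", body)

    def create_session_request(self, agent_id, environment_id):
        """(method, url, body) for starting a session in an environment."""
        body = json.dumps({"agent_id": agent_id,
                           "environment_id": environment_id})
        return ("POST", f"{self.base_url}/v1/sessions", body)

client = ManagedAgentClient(api_key="sk-...")
method, url, body = client.create_agent_request("triage-bot",
                                                tools=["code_execution"])
print(method, url)
```

Note what moved: the agent loop, sandbox, and state all live behind those endpoints, so the client surface collapses to resource creation, which is exactly why the trust question shifts to the runtime.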
Sources: Ability.ai | Databricks | Salesforce
OpenAI Codex Becomes a Desktop Agent
OpenAI shipped a major Codex update on April 16 transitioning it from coding assistant to full desktop agent: it can now see, click, and type across any application on a Mac, supports 90+ plugins, and operates in parallel with the user. Recent reviews characterize the shift as moving from "Copilot" (suggesting code) to "Agent" (executing software engineering tasks end-to-end).
The OS-layer play matters. LLM providers are now competing for the primary user interface, pushing past the IDE and into general-purpose computer use. Anthropic's Claude Code Routines (autonomous bug-fixing and PR review) sits in the same territory.
Why it matters: If agents running on the desktop OS become the default, every credential cached in a browser session is effectively an agent credential, and current enterprise IAM is not built for that.
Sources: Inside Telecom | Zenvanriel
NIST Opens RFI on Securing AI Agents
The Center for AI Standards and Innovation (CAISI) at NIST issued a Request for Information on securing AI agents, systems that plan autonomously and take actions in external environments. The scope explicitly extends beyond output filtering and model alignment to cover authorization, tool-use boundaries, and post-action accountability. This lands alongside NIST's broader restructuring of the National Vulnerability Database toward a risk-based triage model as CVE submissions surged 263% since 2020.
Parallel regulatory pressure is closing in: the EU AI Act's hiring bias audit requirements begin enforcement ~105 days out, and major insurance carriers have started excluding AI liability from corporate policies.
Why it matters: Regulators have accepted that "agents that act" need a different security model than "chatbots that talk"; enterprises deploying autonomous workflows should start building the authorization and audit trails now, before the specs are finalized.
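The pattern the RFI scope points at (authorization before action, accountability after) is cheap to start. A minimal sketch, with illustrative policy shape and tool names: every tool call passes a policy gate, and both allowed and denied calls land in an append-only trail.

```python
import datetime

# Sketch: gate every agent tool call through a per-agent grant, and record
# the decision either way. Policy and tool names are illustrative.

AUDIT_LOG = []

def authorize(policy, agent_id, tool, args):
    """Allow the call only if the tool is in the agent's grant; log both ways."""
    allowed = tool in policy.get(agent_id, set())
    AUDIT_LOG.append({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent_id, "tool": tool, "args": args, "allowed": allowed,
    })
    return allowed

policy = {"agent-7": {"read_ticket", "post_comment"}}
print(authorize(policy, "agent-7", "read_ticket", {"id": 42}))    # True
print(authorize(policy, "agent-7", "delete_repo", {"name": "x"})) # False
```

Logging denials matters as much as allowing calls: a burst of denied `delete_repo` attempts is exactly the post-action accountability signal the RFI asks about.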
Sources: Rockville Nights | GBHackers | Asanify
GPT-5.4-Cyber and Claude Mythos: The Restricted-Access Tier
OpenAI released GPT-5.4-Cyber this week, a "cyber-permissive" variant tuned for defensive security work, with a lowered refusal boundary, distributed only to vetted partners. It arrived a week after Anthropic's Claude Mythos Preview, described internally as capable of finding bugs that sat undiscovered for 27 years and restricted from general availability after the capability leak prompted an emergency federal briefing. Reports indicate NSA and DoD are using Mythos despite Anthropic being flagged as a supply chain concern.
A two-tier model market is now visible: publicly available frontier models with safety filters, and a shadow tier of restricted-access "permissive" models operating under memoranda with national security customers.
Why it matters: The defensive capability in these restricted models almost certainly implies equivalent offensive capability, and the asymmetry in access defines who gets to hunt bugs first.
Sources: Reuters | Mashable | Medium
Active Exploitation Watchlist + Notable CVEs
| CVE | Product | Severity | Status | Action |
|---|---|---|---|---|
| CVE-2026-21643 | Fortinet FortiClient EMS | 9.1 Critical | Actively Exploited | Patch Now |
| CVE-2026-34197 | Apache ActiveMQ Classic | 8.8 High | Actively Exploited | Patch Now |
| CVE-2026-1340 | Ivanti EPMM | 9.8 Critical | Actively Exploited | Patch Now |
| CVE-2026-32201 | Microsoft SharePoint Server | 6.5 Medium | Actively Exploited | Patch Now |
| CVE-2026-33032 | nginx-ui | 9.8 Critical | Actively Exploited | Patch Now |
| CVE-2026-39987 | Marimo Python Notebook | 9.3 Critical | Actively Exploited | Patch Now |
| CVE-2026-34621 | Adobe Acrobat Reader | 8.8 High | Actively Exploited | Patch Now |
| CVE-2026-33824 | Windows IKE | 9.8 Critical | Patch Available | Patch Now |
| CVE-2026-39808 | Fortinet FortiSandbox | 9.1 Critical | POC Public | Patch Now |
| CVE-2009-0238 | Microsoft Excel | 9.3 Critical | Actively Exploited | Mitigate |
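If you ingest a watchlist like this programmatically, the triage order is mechanical: exploitation status first, then CVSS. A small sketch over rows mirroring the table columns above (field names are our own):

```python
# Sketch: sort watchlist rows so actively exploited CVEs always outrank
# higher-scored-but-unexploited ones.

def triage(rows):
    order = {"Actively Exploited": 0, "POC Public": 1, "Patch Available": 2}
    return sorted(rows, key=lambda r: (order.get(r["status"], 3), -r["cvss"]))

rows = [
    {"cve": "CVE-2026-33824", "cvss": 9.8, "status": "Patch Available"},
    {"cve": "CVE-2026-1340",  "cvss": 9.8, "status": "Actively Exploited"},
    {"cve": "CVE-2026-32201", "cvss": 6.5, "status": "Actively Exploited"},
    {"cve": "CVE-2026-39808", "cvss": 9.1, "status": "POC Public"},
]
print([r["cve"] for r in triage(rows)])
```

Note the SharePoint entry: a 6.5 Medium under active exploitation sorts ahead of a 9.8 Critical that is merely patchable, which is the whole argument for status-first triage.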
The Edge
The week's big story isn't a breach, a model, or a CVE: it's that we just watched the seam between "AI productivity tool" and "production credential store" tear open in public, and almost nobody is positioned to defend it. Vercel was not compromised because its infrastructure was weak. It was compromised because an employee clicked "Allow" on an OAuth consent screen for an AI tool called Context.ai, and that consent grant turned into source code, NPM tokens, and GitHub tokens in the hands of ShinyHunters. Meanwhile, the same week, Anthropic shipped Claude Managed Agents, OpenAI turned Codex into a desktop agent, and enterprises rushed to adopt both. We are accelerating into a world where thousands of agents hold OAuth scopes into our Workspaces, repos, and CRMs, at precisely the moment we proved the OAuth consent surface is the soft underbelly.
The uncomfortable observation: most security programs still treat "agent" as a product category rather than as a new kind of privileged user. There is no SIEM rule for "an AI integration was granted admin scope on Monday and exfiltrated the repo on Friday." There is no SOAR playbook for "a Salesforce connector owned by a departed employee's personal AI tool." NIST's RFI on AI agent security is a useful tell (regulators now see this gap clearly), but the standards won't land in time. The ShinyHunters campaign against Salesforce and Snowflake tenants, the Payouts King QEMU trick, the Defender zero-days, and the Vercel OAuth breach are all the same story from different angles: the defensive perimeter is not where you think it is, and the attackers already know that.
What defenders should watch for in the next 30 days: OAuth grant audits in Google Workspace and Microsoft 365, inventory of which AI tools hold scopes inside Salesforce and Snowflake, and a hard conversation about whether "Copilot-class" integrations should be allowed to install without security review. If you ship agents to production before you can answer "what can this agent do if compromised," you are underwriting the next Vercel with your own infrastructure.
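That missing "granted Monday, exfiltrated Friday" rule is just a temporal join between two telemetry streams. A minimal sketch, with assumed event shapes standing in for your IdP grant log and egress telemetry, and an arbitrary 5 GB / 7-day threshold:

```python
# Sketch: correlate a new OAuth grant with outsized egress by the same app
# within a window after the grant. Thresholds are illustrative.

def correlate(grant_events, egress_events, window_days=7, egress_gb=5.0):
    """Alert on (app, grant_day, egress_day, gb) pairs inside the window."""
    alerts = []
    for g in grant_events:
        for e in egress_events:
            same_app = e["app"] == g["app"]
            in_window = 0 <= e["day"] - g["day"] <= window_days
            if same_app and in_window and e["gb"] >= egress_gb:
                alerts.append((g["app"], g["day"], e["day"], e["gb"]))
    return alerts

grants = [{"app": "Context.ai", "day": 0}]              # Monday: scope granted
egress = [{"app": "Context.ai", "day": 4, "gb": 12.0}]  # Friday: repo-sized pull
print(correlate(grants, egress))
```

Neither stream is exotic; the gap the column describes is that almost nobody joins them, because the grant lives in identity tooling and the egress lives in network tooling.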