Trending Topics
Update: Lazarus Expands Ransomware Playbook, Leveraging Medusa Against U.S. Healthcare
North Korea’s Lazarus Group has expanded its financially motivated operations by deploying the Medusa ransomware. Security researchers at Symantec assess that a Lazarus Group subgroup, commonly tracked as Andariel or Stonefly and linked to North Korea’s Reconnaissance General Bureau, is now operating as an affiliate within the Medusa ransomware ecosystem. Medusa functions as a ransomware-as-a-service (RaaS) platform, providing affiliates with encryption tooling, leak-site infrastructure, and negotiation frameworks in exchange for a percentage of ransom proceeds. Recent campaigns attributed to the subgroup include both successful and disrupted intrusions targeting U.S. healthcare providers, with observed ransom demands averaging approximately $260,000, though some cases have involved multimillion-dollar extortion attempts. Symantec reporting indicates operational overlap with prior Lazarus tradecraft, including the deployment of credential-harvesting utilities, custom proxy tunneling tools, and commodity frameworks such as Mimikatz, alongside malware families historically associated with North Korean clusters. Infrastructure patterns, tooling reuse, and behavioral telemetry collectively support the assessment that this activity aligns with known Lazarus operational methodologies rather than with independent criminal affiliates. By leveraging Medusa’s established RaaS infrastructure rather than relying solely on strains previously tied to its operations, such as Maui or Play, Lazarus operators appear to prioritize operational efficiency, plausible deniability, and rapid monetization. Collaboration within a criminal affiliate ecosystem blurs traditional distinctions between state-sponsored advanced persistent threat activity and financially motivated cybercrime.
This convergence demonstrates how geopolitical threat actors are increasingly integrating commercial cybercrime tooling and revenue-sharing models to sustain strategic intelligence objectives while diffusing attribution risk through shared infrastructure and affiliate-based operations. Organizations, particularly in the healthcare sector, should prioritize segmenting clinical and administrative networks, enforcing MFA across all remote and privileged access pathways, monitoring for credential dumping and lateral movement, and maintaining tested, offline backups to reduce the operational impact of ransomware.
Anthropic Alleges Industrial-Scale Model Distillation Campaign Targeting Claude
Anthropic disclosed that three Chinese AI laboratories conducted what it described as “industrial-scale” model distillation campaigns against its Claude models. According to Anthropic, DeepSeek, Moonshot AI, and MiniMax generated more than 16 million exchanges through approximately 24,000 fraudulent accounts, bypassing regional restrictions and terms of service. The campaigns allegedly used coordinated proxy networks and large-scale automated prompting to extract high-value capabilities such as agentic reasoning, tool orchestration, coding performance, and chain-of-thought reasoning traces. Anthropic stated that the activity showed structured prompt repetition and synchronization patterns inconsistent with normal usage, indicating deliberate capability harvesting rather than legitimate experimentation. In one case, MiniMax reportedly redirected traffic within 24 hours of a Claude model update to capture new system capabilities.
Anthropic emphasized that while distillation is a legitimate training technique when used internally, unauthorized distillation of proprietary models undermines safeguards and raises national security concerns. The company warned that distilled models may not retain safety controls designed to prevent misuse in areas such as cyber operations, surveillance, and biological threat modeling. It further argued that such campaigns complicate export control enforcement by enabling foreign labs to accelerate development through extraction rather than independent training. To counter these efforts, Anthropic has deployed behavioral fingerprinting systems, traffic classifiers, enhanced verification processes, and intelligence-sharing initiatives with other AI providers and cloud platforms. The disclosure follows similar findings from the Google Threat Intelligence Group regarding model-extraction attempts targeting Gemini, signaling a broader escalation in competitive and geopolitical model-distillation activity.
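Anthropic has not published the internals of its traffic classifiers. As a purely illustrative sketch of one behavioral signal such systems could use (the function names, normalization rules, and threshold below are hypothetical assumptions, not Anthropic's actual detection logic), the following flags prompt templates reused near-verbatim across many distinct accounts:

```python
from collections import defaultdict
import re

def normalize(prompt: str) -> str:
    """Collapse variable parts (numbers, whitespace, case) so
    structurally identical prompts map to the same template."""
    return re.sub(r"\s+", " ", re.sub(r"\d+", "<N>", prompt)).strip().lower()

def flag_template_sharing(events, min_accounts=3):
    """events: iterable of (account_id, prompt) pairs.
    Returns templates reused across at least min_accounts distinct
    accounts -- a crude proxy for coordinated, scripted prompting."""
    accounts_by_template = defaultdict(set)
    for account, prompt in events:
        accounts_by_template[normalize(prompt)].add(account)
    return {t: accts for t, accts in accounts_by_template.items()
            if len(accts) >= min_accounts}
```

Real systems would combine many such signals (timing synchronization, proxy clustering, capability-targeted prompt distributions) rather than relying on any single heuristic.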
Multi-Stage NPM Supply Chain Attack Uses PNG Steganography to Deploy Pulsar RAT
Security researchers at Veracode uncovered a malicious NPM package, buildrunner-dev, that uses a postinstall hook to initiate a multi-stage Windows infection chain. The package downloads an obfuscated batch file at install time, establishes persistence via the Startup folder, and leverages a fodhelper[.]exe UAC bypass to gain elevated privileges without prompting the user. Once running as admin, it launches a heavily fragmented PowerShell command that profiles installed antivirus products and dynamically selects tailored evasion paths. Rather than hosting binaries directly, the campaign hides payloads inside PNG images stored on a public image service, extracting embedded malware from RGB pixel values using a custom steganographic routine. The first image deploys an AMSI bypass, while the second reveals a GZip-compressed 64-bit [.]NET loader capable of process hollowing and encrypted payload staging. The final stage retrieves and decrypts yet another steganographic image, ultimately deploying the Pulsar remote access trojan entirely in memory. The [.]NET loader demonstrates layered defense evasion, including three distinct AMSI bypass techniques, dynamic API resolution to avoid suspicious imports, AES and TripleDES encryption chains, and per-antivirus persistence logic. It performs 64-bit process hollowing into legitimate Windows processes such as conhost[.]exe, ensuring malicious code runs under a trusted process identity. All command-and-control traffic is extracted from steganographic PNGs, and decrypted payloads are loaded reflectively without writing cleartext binaries to disk. The campaign combines NPM typosquatting, CI evasion checks, obfuscation padding, proxy image hosting, and runtime cryptography to defeat both static and behavioral detection. By targeting developer environments through the software supply chain, the operation transforms routine dependency installation into a stealthy initial access vector. 
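Veracode has not released the attackers' decoder, but the underlying idea of carrying bytes in RGB channel values can be shown with a minimal, hypothetical sketch. Here a flat list of pixel tuples stands in for real PNG decoding, and the 4-byte length prefix is an illustrative framing choice, not the campaign's actual format:

```python
import struct

def embed(payload: bytes, width: int, height: int):
    """Pack a 4-byte big-endian length prefix plus payload directly
    into RGB channel values; unused channels are zero-padded."""
    data = struct.pack(">I", len(payload)) + payload
    capacity = width * height * 3  # three channel bytes per pixel
    if len(data) > capacity:
        raise ValueError("payload too large for image dimensions")
    data = data.ljust(capacity, b"\x00")
    return [tuple(data[i:i + 3]) for i in range(0, capacity, 3)]

def extract(pixels):
    """Reverse the embedding: flatten channels back into a byte
    stream, read the length prefix, return the payload."""
    flat = bytes(c for px in pixels for c in px)
    (length,) = struct.unpack(">I", flat[:4])
    return flat[4:4 + length]
```

Because the carrier remains a structurally valid image, signature-based scanning of the hosting service sees only ordinary PNG traffic, which is why the report recommends behavioral rather than static detection.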
The result is a highly engineered attack that blends open-source abuse, steganography, and in-memory execution to quietly establish full remote control over compromised systems. Organizations should enforce strict dependency governance, block post-install script execution where not explicitly required, monitor for anomalous outbound connections to image-hosting domains, and deploy behavioral EDR controls capable of detecting AMSI tampering, UAC bypass attempts, and process hollowing to disrupt similar steganography-based supply chain attacks early in the kill chain.
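One concrete way to act on the dependency-governance guidance above is to audit installed packages for lifecycle scripts that run automatically at install time. This is a hedged sketch, not a specific product's API; the function name and report format are illustrative:

```python
import json
from pathlib import Path

LIFECYCLE_HOOKS = ("preinstall", "install", "postinstall")

def find_install_hooks(node_modules: str):
    """Walk a node_modules tree and report packages whose package.json
    declares scripts that npm runs automatically during installation."""
    findings = {}
    for manifest in Path(node_modules).rglob("package.json"):
        try:
            scripts = json.loads(manifest.read_text()).get("scripts", {})
        except (json.JSONDecodeError, OSError):
            continue  # skip unreadable or malformed manifests
        hooks = {h: scripts[h] for h in LIFECYCLE_HOOKS if h in scripts}
        if hooks:
            findings[str(manifest.parent)] = hooks
    return findings
```

As a blunter control, npm's built-in `ignore-scripts` setting (`npm install --ignore-scripts`, or `npm config set ignore-scripts true`) disables lifecycle script execution entirely, at the cost of breaking the minority of packages that legitimately depend on it.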
Update: OpenClaw Skill Poisoning Campaign Weaponizes AI Agents to Deploy Atomic macOS Stealer
Threat actors are distributing Atomic macOS Stealer (AMOS) via malicious OpenClaw skills, representing a significant evolution from cracked-software delivery to AI-mediated supply-chain compromise. Instead of relying on traditional phishing or fake installers, attackers embed deceptive setup requirements inside SKILL[.]md files that instruct the AI agent to download a supposed prerequisite, “OpenClawCLI,” from attacker-controlled infrastructure. The malicious page hosts a Base64-encoded shell command that decodes into a curl request pulling a remote script from a hard-coded IP address. Depending on the model in use, the agent may execute the command autonomously or repeatedly prompt the user to install a “driver,” framing the action as a legitimate dependency. This shifts the social engineering layer away from direct user deception and toward workflow manipulation, where the AI agent becomes a trusted intermediary that operationalizes the attacker’s instructions. The campaign spans multiple skill marketplaces, including ClawHub and SkillsMP, and involves dozens of overlapping malicious skills that appear as benign utilities, developer tools, or crypto-related automations. While many have been removed, their code persists in public repositories, creating residual exposure for mirrored or self-hosted deployments. Once executed, the infection chain drops a Mach-O universal binary capable of running on both Intel and Apple Silicon systems, signed only with an ad-hoc certificate that fails macOS trust validation. The malware presents a fake password dialog to harvest user credentials and requests Finder control permissions to facilitate broad filesystem access.
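Defenders vetting skill files could triage for exactly this pattern: Base64 blobs that decode into shell download commands. The following is a rough heuristic sketch (the regexes and the 24-character minimum blob length are assumptions for illustration, not a published detection rule):

```python
import base64
import re

# Candidate Base64 runs long enough to plausibly encode a command.
B64_BLOB = re.compile(r"[A-Za-z0-9+/]{24,}={0,2}")
# Download tooling aimed at a URL or raw IPv4 address.
SUSPICIOUS = re.compile(r"\b(curl|wget)\b.*(https?://|\d{1,3}(\.\d{1,3}){3})")

def scan_skill_text(text: str):
    """Decode candidate Base64 blobs in a skill file and flag any
    that turn into shell download commands."""
    hits = []
    for blob in B64_BLOB.findall(text):
        try:
            decoded = base64.b64decode(blob, validate=True).decode("utf-8")
        except Exception:
            continue  # not valid Base64 text; ignore
        if SUSPICIOUS.search(decoded):
            hits.append(decoded)
    return hits
```

Encoding alone is of course not malicious; in practice such a check would feed a review queue rather than block skills outright.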
Unlike earlier AMOS campaigns that emphasized persistence, this variant prioritizes rapid, high-value data theft, targeting Apple and KeePass keychains, browser cookies and saved credentials, cryptocurrency wallets, Telegram and Discord data, Apple Notes, and a wide range of document types from Desktop, Documents, and Downloads directories. It deliberately ignores .env files despite the API keys they commonly contain, suggesting refined collection priorities aligned to directly monetizable data. Stolen artifacts are staged locally, compressed into ZIP archives, and exfiltrated via HTTPS POST requests to attacker-controlled infrastructure using structured form data fields for campaign tracking. Organizations adopting AI agent platforms should restrict skill installation to vetted repositories, execute agents within isolated containers or virtualized sandboxes, require explicit user approval for executing external commands, and monitor macOS endpoints for anomalous keychain access, archive staging activity, and outbound data exfiltration over HTTPS to detect AMOS-style abuse of agent workflows.