New Stealth Malware Technique “Waiting Thread Hijacking” Bypasses Detection with Minimal Footprint
Researchers at Check Point have detailed a novel malware injection method known as Waiting Thread Hijacking (WTH), which executes malicious code inside legitimate Windows processes without relying on traditionally flagged APIs. WTH builds on classic thread hijacking but eliminates the high-alert calls that Endpoint Detection and Response (EDR) systems routinely monitor and block, such as SuspendThread and SetThreadContext. Instead, it targets threads that are already sitting in a natural wait state, typically worker threads managed by the Windows Thread Pool awaiting a signal to resume, and overwrites the return address on their stack. Once the wait condition is satisfied, the hijacked thread unwittingly jumps to the injected shellcode, letting attackers run code with minimal detection risk.

The technique relies on benign-looking API calls—VirtualAllocEx, WriteProcessMemory, and GetThreadContext—to allocate memory, write the payload, and locate the stack. The shellcode preserves the thread's original state, executes the payload, and then returns to the original routine, keeping the host process stable. The injection steps can also be distributed across multiple child processes to further obscure the footprint and complicate behavioral analysis. Though not fully undetectable, WTH slips past static analysis and evades many behavior-based triggers, particularly those tuned to specific API sequences. Its success against some leading EDR platforms highlights gaps in modern detection and the continued evolution of injection tradecraft, reinforcing the need for defenders to rely on broader behavioral monitoring rather than API-focused rulesets.
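One practical way to act on that advice is to look at what WTH cannot avoid leaving behind: the payload written via VirtualAllocEx and WriteProcessMemory still resides in a private, executable memory region of the target process. The sketch below is a minimal, Windows-only illustration of that kind of memory-focused check using Python's ctypes. It is a generic heuristic for flagging committed private executable regions, not Check Point's detection logic; the constants and structure layout follow the documented Win32 API.

import ctypes
from ctypes import wintypes

kernel32 = ctypes.windll.kernel32
kernel32.OpenProcess.argtypes = [wintypes.DWORD, wintypes.BOOL, wintypes.DWORD]
kernel32.OpenProcess.restype = wintypes.HANDLE
kernel32.VirtualQueryEx.argtypes = [wintypes.HANDLE, wintypes.LPCVOID,
                                    ctypes.c_void_p, ctypes.c_size_t]
kernel32.VirtualQueryEx.restype = ctypes.c_size_t

MEM_COMMIT = 0x1000
MEM_PRIVATE = 0x20000
EXEC_PROTECTIONS = {0x10, 0x20, 0x40, 0x80}   # PAGE_EXECUTE* variants
PROCESS_QUERY_INFORMATION = 0x0400

class MEMORY_BASIC_INFORMATION(ctypes.Structure):
    # 64-bit layout of the structure filled in by VirtualQueryEx
    _fields_ = [("BaseAddress", ctypes.c_void_p),
                ("AllocationBase", ctypes.c_void_p),
                ("AllocationProtect", wintypes.DWORD),
                ("PartitionId", wintypes.WORD),
                ("RegionSize", ctypes.c_size_t),
                ("State", wintypes.DWORD),
                ("Protect", wintypes.DWORD),
                ("Type", wintypes.DWORD)]

def executable_private_regions(pid):
    """Yield (address, size, protection) for committed, private, executable memory."""
    handle = kernel32.OpenProcess(PROCESS_QUERY_INFORMATION, False, pid)
    if not handle:
        raise OSError(f"OpenProcess failed for PID {pid}")
    try:
        mbi, address = MEMORY_BASIC_INFORMATION(), 0
        while kernel32.VirtualQueryEx(handle, ctypes.c_void_p(address),
                                      ctypes.byref(mbi), ctypes.sizeof(mbi)):
            if (mbi.State == MEM_COMMIT and mbi.Type == MEM_PRIVATE
                    and (mbi.Protect & 0xFF) in EXEC_PROTECTIONS):
                # Injected shellcode typically ends up in a region like this.
                yield address, mbi.RegionSize, mbi.Protect
            address += mbi.RegionSize
    finally:
        kernel32.CloseHandle(handle)

Flagging every such region still produces benign hits (JIT compilers, for example), so results would need to be correlated with other signals, but it illustrates the kind of memory- and behavior-centric telemetry the researchers recommend over API-sequence rules.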
PasivRobber Malware Suite Targets macOS with Espionage Capabilities and Cross-Platform Potential
PasivRobber is a newly identified, multi-layered macOS malware suite that combines deceptive system impersonation, persistent surveillance, and data exfiltration. Initially uncovered in March 2025 via a suspicious binary named “wsus” uploaded to VirusTotal, the malware masquerades as legitimate Apple processes, including naming its launcher “goed” to mimic the macOS geolocation daemon “geod.” It deploys through a signed installer package using a developer ID registered to “weihu chen,” which drops an unsigned secondary payload containing the core components. The suite includes several binaries—wsus, goed, and center—that handle command execution, screenshot capture, system profiling, and targeted application tampering. Its architecture features 28 modular plugins located in /Library/protect/wsus/bin_arm/plugins/, each designed to harvest sensitive data from specific sources, including Safari, Chrome, Firefox, Outlook, Apple Mail, Foxmail, cloud service configurations, and Chinese IM apps such as WeChat, QQ, and WeCom. Exfiltrated data is consolidated into a local SQLite database, with logs and configurations protected using the Tiny Encryption Algorithm (TEA). PasivRobber also embeds Windows binaries within the macOS package, indicating either development from a cross-platform toolkit or preparation for Windows-based payloads, expanding its potential operational footprint.

Attribution remains unconfirmed, but strong evidence links PasivRobber to Chinese state-aligned interests. Developer paths contain the name “Meiya,” and the associated certificate is registered to Xiamen Huanya Zhongzhi Technology Partnership Enterprise, linked to Xiamen Meiya Pico Information Co., Ltd.—a Chinese forensics and surveillance software firm previously sanctioned by the U.S. Treasury for ties to the Chinese military-industrial complex.

The malware currently affects Intel-based macOS systems and shows no compatibility with Apple Silicon (M1/M2/M3) or iOS devices, although future versions cannot be ruled out. There is no evidence it has been deployed against U.S.-based macOS users or distributed widely; its behavior suggests a highly selective targeting scope focused on Chinese domestic users, developers, and enterprise professionals. The malware's ability to inject into running applications, disable protections such as System Integrity Protection (SIP), and remotely uninstall itself illustrates an operation built for stealth, persistence, and long-term espionage. PasivRobber's scope, stealth, and design point to a well-resourced actor focused on persistent surveillance and sensitive data collection across trusted software environments.
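Because the analysis names concrete on-disk artifacts, a quick host triage is easy to script. The sketch below is a minimal, read-only check based only on indicators mentioned above: the /Library/protect/wsus/bin_arm/plugins/ directory and the lookalike launcher name "goed". The assumption that the launcher appears in the process list under that exact name is illustrative, and the absence of these artifacts does not prove a host is clean.

import os
import subprocess

PLUGIN_DIR = "/Library/protect/wsus/bin_arm/plugins"   # plugin path reported for PasivRobber
SUSPECT_PROCESS = "goed"                                # lookalike of Apple's legitimate "geod"

def triage():
    findings = []

    # 1. Filesystem indicator: the reported plugin directory.
    if os.path.isdir(PLUGIN_DIR):
        plugins = os.listdir(PLUGIN_DIR)
        findings.append(f"{PLUGIN_DIR} exists with {len(plugins)} item(s)")

    # 2. Process indicator: a running binary named exactly "goed".
    try:
        output = subprocess.run(["ps", "-axo", "comm="],
                                capture_output=True, text=True, check=True).stdout
        for line in output.splitlines():
            if os.path.basename(line.strip()) == SUSPECT_PROCESS:
                findings.append(f"running process named '{SUSPECT_PROCESS}': {line.strip()}")
    except (OSError, subprocess.CalledProcessError):
        pass  # ps unavailable or failed; skip the process check

    return findings

if __name__ == "__main__":
    hits = triage()
    print("\n".join(hits) if hits else "no PasivRobber indicators found by this check")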
AI Code Hallucinations Fuel “Slopsquatting” Supply Chain Threat
A newly identified software supply chain threat, dubbed slopsquatting, stems from the way code-generating AI models fabricate non-existent software package names, referred to as package hallucinations. These hallucinated packages often look credible, leading developers to believe they are legitimate and to search for them in public repositories. Attackers can exploit this by uploading malicious packages under those same names to platforms like PyPI or npm, anticipating that developers will unknowingly install them.

A collaborative study by researchers from the University of Texas at San Antonio, Virginia Tech, and the University of Oklahoma tested 16 popular LLMs using over half a million code prompts in Python and JavaScript. Nearly 20% of the AI-suggested packages did not exist, and over half of the hallucinated names recurred across multiple queries, making them predictable and exploitable by threat actors. The risk is compounded because commercial and open-source models alike hallucinate, although commercial LLMs such as GPT-4 do so at significantly lower rates. These hallucinations are not random: they persist consistently and are more likely when prompts involve trending topics, and in some cases the fabricated names closely resemble real packages, further increasing the likelihood of developer confusion. The researchers also confirmed that deleted or deprecated packages were not responsible for most hallucinations, debunking one common assumption. The concern is that even if an LLM can sometimes detect its own hallucinations, developers often place too much trust in its output, especially during fast-paced development. This creates a dangerous scenario in which malicious actors can proactively plant poisoned packages, turning LLMs into unintentional delivery vehicles for malware in production environments.
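A lightweight mitigation is to treat any package name that appears in generated code as untrusted until it has been checked against both the project's own dependency list and the public index. The sketch below is a minimal illustration, not a recommendation from the study: it uses PyPI's public JSON endpoint (https://pypi.org/pypi/<name>/json) to distinguish names that do not exist at all (likely hallucinations) from names that exist but are not yet vetted (potential slopsquatting targets). The package name fastjsonlib is made up for demonstration.

import urllib.error
import urllib.request

def exists_on_pypi(name):
    """Return True if the package name is registered on PyPI, False if it is not."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:          # unregistered name: a prime slopsquatting candidate
            return False
        raise

def vet_packages(candidates, allowlist):
    """Classify package names suggested by an LLM before anyone runs 'pip install'."""
    for name in candidates:
        if name in allowlist:
            print(f"{name}: already an approved project dependency")
        elif not exists_on_pypi(name):
            print(f"{name}: NOT on PyPI -- likely hallucinated; an attacker could register it")
        else:
            print(f"{name}: exists on PyPI but is unvetted -- review before installing")

# Example: 'requests' is a real, approved dependency; 'fastjsonlib' is a made-up name
# standing in for an LLM hallucination.
vet_packages(["requests", "fastjsonlib"], allowlist={"requests"})

Because over half of the hallucinated names in the study recurred across queries, a check like this, or simply pinning dependencies through a lockfile and a private index, narrows the window that slopsquatting relies on.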