Trending Topics

TRENDING TOPICS FEB 23, 2026

Starkiller PhaaS Raises the Risk of MFA Bypass Phishing

Starkiller is a phishing framework that increases the credibility and scale of credential theft by showing victims the real login page during the attack, rather than a copy that can become outdated. It does this by running a browser in a container and placing attacker infrastructure between the victim and the legitimate service, so the victim interacts with authentic page content while the attacker captures the traffic passing through the session. This approach removes many of the visual clues people and security tools use to spot phishing pages, because the design, scripts, and page behavior come directly from the real website in real time. It also weakens page-fingerprinting and blocklisting methods that rely on detecting a known phishing template, since there may be no static page to analyze. The framework reportedly captures keystrokes, form submissions, cookies, and session tokens, while giving operators live visibility into session activity, device details, and basic location data. The most serious risk is that MFA can still be completed by the victim against the real service while the attacker intercepts the resulting authenticated session, allowing account takeover without a second login attempt.

The reporting also indicates that Starkiller is sold as a managed cybercrime platform operated by Jinkusu, with subscriptions, updates, support channels, and a polished dashboard that lowers the barrier for less experienced operators. Users can launch campaigns by entering a target brand URL while the platform handles technical setup in the background, then track campaign performance through visit counts, conversion rates, real-time alerts, and session monitoring features that resemble those of commercial software products.
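What capturing traffic in transit means in practice can be sketched abstractly: the relay forwards the victim's traffic to the legitimate service and records whatever session material appears in the exchange. The sketch below is purely illustrative of the capture step; the header names are generic examples and nothing here reflects Starkiller internals.

```python
# Abstract sketch of the adversary-in-the-middle capture step: the relay
# passes traffic through to the legitimate service and records session
# material it observes. Generic header names, not real tooling.

def harvest_session_material(observed_headers: list[tuple[str, str]]) -> dict:
    """Collect cookies and bearer tokens from headers seen in a relayed exchange."""
    captured = {"cookies": [], "bearer_tokens": []}
    for name, value in observed_headers:
        lname = name.lower()
        if lname == "set-cookie":
            # Cookies issued *after* the victim completes MFA carry the
            # authenticated session the attacker can replay.
            captured["cookies"].append(value.split(";", 1)[0])
        elif lname == "authorization" and value.lower().startswith("bearer "):
            captured["bearer_tokens"].append(value[7:])
    return captured
```

The point of the sketch is that the relay never needs to break MFA: it only needs to observe the session artifacts the real service issues once MFA succeeds, which is why token controls and rapid revocation matter.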
Starkiller also includes URL masking and deceptive link generation to improve click-through rates, and researchers assess email as the most likely delivery path through messages that imitate routine business activity and trusted cloud notifications. The platform is also reported to collect contact information from compromised sessions to support follow-on targeting, which raises the risk of repeat phishing waves and broader internal spread after an initial compromise. An active user forum, Telegram-based support, feature requests, and monthly updates point to a growing ecosystem under active refinement, increasing the likelihood of faster adoption and more capable campaigns over time. These findings reinforce that MFA is still necessary but not sufficient on its own, and organizations should strengthen phishing-resistant authentication, tighten session token controls and rapid revocation processes, improve email and link screening, and update user awareness training so employees verify the destination and context before signing in.
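On the link-screening side, one common lure pattern is a trusted brand name riding in the subdomain of an unrelated domain. A minimal sketch of that check follows; the brand list is illustrative and the registrable-domain parsing is deliberately naive (a real screener would use the Public Suffix List).

```python
from urllib.parse import urlparse

WATCHED_BRANDS = {"microsoft", "okta", "google"}  # illustrative list

def suspicious_brand_use(url: str) -> bool:
    """Flag URLs where a watched brand appears in a subdomain label of an
    unrelated registrable domain. Naive two-label parsing; a production
    screener would resolve the registrable domain via the Public Suffix List."""
    host = (urlparse(url).hostname or "").lower()
    labels = host.split(".")
    if len(labels) < 2:
        return False
    registrable = ".".join(labels[-2:])
    # Does the brand actually own the registrable domain?
    owns_domain = any(registrable.startswith(b + ".") for b in WATCHED_BRANDS)
    # Does the brand name appear only in a subdomain label?
    brand_in_sub = any(b in label for label in labels[:-2] for b in WATCHED_BRANDS)
    return brand_in_sub and not owns_domain
```

A heuristic like this will not catch every masked URL, but it cheaply surfaces the specific deception pattern described above for human review.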

jsPDF PDF Injection Flaw Creates Broad Application Risk

A high-severity flaw in jsPDF could expose a large number of web applications to malicious PDF manipulation, as the library is widely used to generate documents in browsers from user-supplied content. The issue, tracked as CVE-2026-25755, affects versions prior to 4.1.0 and has a reported CVSS score of 8.8, indicating a serious risk for production environments that generate invoices, reports, and user-facing exports. The weakness is tied to the addJS function, which inserts untrusted input into a PDF JavaScript stream without proper sanitization, allowing an attacker to escape the intended content and inject new PDF objects. In practical terms, a crafted form field or comment can turn a normal generated PDF into a weaponized file that performs unwanted actions when opened. This is more concerning than a standard browser script issue because it alters the PDF structure itself, allowing the payload to work through viewer behavior rather than relying solely on script execution within a web page. The scale of exposure is amplified by jsPDF's adoption, with more than 1.5 million weekly npm downloads, indicating the flaw can spread risk across many unrelated products simultaneously.

The reported impact goes beyond simple pop-up proof-of-concept behavior, because attackers may tamper with document actions, metadata, annotations, and other PDF objects in ways that support deception, phishing, or document trust abuse across different viewers. The advisory also notes that some malicious actions can still be triggered via PDF mechanisms even when JavaScript support is limited or disabled in the viewer, which increases reliability and makes detection harder for users who assume viewer restrictions are sufficient. Because many teams embed jsPDF deep in their export workflows, this can become a quiet supply chain problem: a normal business feature becomes the delivery path for malicious content if user input is not tightly controlled.
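The break-out mechanic can be illustrated with simplified PDF object syntax. The sketch below is conceptual only and does not reflect jsPDF's actual internals: in a PDF string literal, backslashes and unbalanced parentheses are structural characters, so unescaped input can close the string early and smuggle in new PDF objects.

```python
# Conceptual illustration of the injection class, using simplified PDF
# object syntax rather than jsPDF's real code. Parentheses delimit PDF
# string literals, so an unescaped ")" terminates the string.

def naive_js_action(user_input: str) -> str:
    # Vulnerable pattern: untrusted input concatenated into a /JS string.
    return "<< /S /JavaScript /JS (" + user_input + ") >>"

# A crafted value closes the string early and appends a second,
# attacker-controlled action object to the document.
payload = "x) >> << /S /Launch /F (calc.exe"
print(naive_js_action(payload))
```

The same shape of bug appears wherever untrusted text is concatenated into a structured format without escaping, which is why the issue extends to metadata and annotations as well as script streams.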
The immediate fix is to upgrade to jsPDF 4.1.0 or later, as the patch reportedly escapes special characters in addJS and related paths. Teams should also review where untrusted input enters PDF generation and enforce strict validation before document creation. It is also important to scan dependencies with package security tools, test generated PDFs across multiple viewers to catch abnormal behavior, and prioritize patching in internet-facing or customer-facing workflows where generated PDFs are shared externally. Organizations that treat this as a routine library update risk missing broader exposure, as any unpatched application that builds PDFs from user data can become an entry point for downstream fraud, phishing, or trust abuse.
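The mitigation concept behind the patch can be sketched simply: neutralize the characters that are structural inside PDF string literals before embedding untrusted input. This mirrors the reported idea of the fix, not jsPDF's exact implementation.

```python
# Sketch of the mitigation concept: escape the characters that are
# structural inside PDF string literals so untrusted input cannot
# terminate the string. Mirrors the idea of the patch, not jsPDF's code.

def escape_pdf_string(text: str) -> str:
    # Escape backslashes first so the later escapes are not double-processed.
    return (text.replace("\\", "\\\\")
                .replace("(", "\\(")
                .replace(")", "\\)"))

# An attempted break-out now stays inert inside the string literal.
print(escape_pdf_string("x) >> << /S /Launch"))
```

Ordering matters here: escaping the backslash last would corrupt the escapes already emitted for parentheses, a classic pitfall in sanitizer code.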

LLM Passwords Look Strong but Are Often Weak

New research shows that passwords generated by large language models can appear complex while remaining far more predictable than passwords created by secure random generators. A proper password generator uses cryptographic randomness to produce characters from an even distribution, which makes each position difficult to guess, but an LLM is built to predict the next character based on learned patterns and probability. That design creates visible complexity while still pushing output toward repeated structures, favored characters, and recurring prefixes that attackers can learn over time. In testing across major models, researchers observed repeated outputs, limited character variety, and strong clustering around similar formats even when the passwords included uppercase letters, lowercase letters, numbers, and symbols. One set of tests found only 30 unique results from 50 password requests, with one 16-character password appearing 18 times, which is a major warning sign for any security control. The findings also showed that many generated passwords avoided repeated characters within a single string and reused a narrow set of symbols and letters, which further reduces real randomness.

The risk becomes more serious because standard password strength tools can score these passwords as excellent even when the true entropy is much lower once the model's bias is measured. Researchers reported that a password that appears to provide around 98 bits of entropy in a truly random system may deliver closer to roughly 27 bits in practice, and some longer AI-generated passwords may fall even lower based on character probability analysis, which can shrink brute force effort from unrealistic to feasible. They also found LLM-style passwords and secrets in public code repositories, configuration files, and technical documents, showing that this problem has already moved from theory into real development workflows as AI coding assistants become more common.
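The gap between apparent and measured entropy can be made concrete with a small calculation. The sample counts below mirror the 50-request test shape described above, but they are illustrative stand-in values, not the researchers' actual data or methodology.

```python
import math
from collections import Counter

def empirical_entropy_bits(samples: list[str]) -> float:
    """Shannon entropy of the observed output distribution, in bits."""
    counts = Counter(samples)
    total = len(samples)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# 50 requests, 30 distinct outputs, one value returned 18 times
# (hypothetical strings standing in for the generated passwords).
samples = (["pw_a"] * 18
           + ["pw_b", "pw_c", "pw_d"] * 2
           + [f"pw_{i}" for i in range(26)])

observed = empirical_entropy_bits(samples)  # roughly 4 bits over this run

# A 16-character password drawn uniformly from 94 printable ASCII
# characters would carry about 16 * log2(94) ≈ 105 bits per password.
uniform = 16 * math.log2(94)
```

The observed figure measures the distribution of this toy run rather than per-password entropy, but it shows the core problem: heavy repetition collapses the effective search space no matter how complex each individual string looks.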
This creates a practical attack path because defenders may assume a long mixed password is safe while attackers can prioritize common LLM output patterns and reduce the search space dramatically. Organizations should treat any password, API key, or secret produced by a general-purpose LLM as untrusted, rotate exposed credentials, and require cryptographic password generation through password managers, operating system randomness APIs, or vetted tooling in development pipelines and CI workflows. Teams building AI assistants should also block direct password generation, force secure randomness tool calls when secrets are needed, and clearly document that behavior so insecure AI-generated credentials do not quietly enter production systems.
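The recommended alternative is straightforward in practice. A minimal sketch using Python's standard secrets module follows; the alphabet is one reasonable example and should be adjusted to local password policy.

```python
import secrets
import string

# Draw every character independently from the OS CSPRNG so the output
# distribution is uniform; entropy is length * log2(len(ALPHABET)).
ALPHABET = string.ascii_letters + string.digits + string.punctuation  # 94 chars

def generate_password(length: int = 16) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())
```

Unlike LLM output, nothing here depends on learned patterns: every position is an independent uniform draw, so a 16-character result carries the full ~105 bits that strength meters assume.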

Hunter Strategy encourages our readers to look for updates in our daily Trending Topics and on Twitter.
