North Korea's Fake IT Worker Infiltration Campaign
What HR needs to know about the most sophisticated hiring fraud operation in the world and how to stop it.
Executive Summary
North Korea is running an industrial-scale scheme to place its own citizens as remote employees inside Western companies. These workers are technically skilled, hold fabricated identities backed by AI-generated documents, and funnel their salaries to fund the North Korean regime in direct violation of international sanctions. Once inside, some steal sensitive data or extort employers. Microsoft, the Department of Justice, and independent researchers have confirmed thousands of infiltrations across the U.S. and globally, including Fortune 500 companies and government agencies. HR is the primary line of defense. The vetting and onboarding process is where these actors are caught or where they succeed.
The Threat: Who They Are and What They Want
Since at least 2020, North Korea has deployed a workforce of thousands of skilled IT professionals to apply for remote jobs at companies around the world. These workers are physically located in North Korea, China, and Russia, but they present themselves as U.S.-based (or locally-based) software developers, engineers, and IT specialists. The operation is tracked by Microsoft Threat Intelligence under the name Jasper Sleet (formerly Storm-0287), with related clusters called Moonstone Sleet and Coral Sleet.
The goal is revenue. Salaries are funneled back to Pyongyang to fund the regime's weapons programs, bypassing international sanctions. But money isn't the only risk. Once employed, some DPRK workers pivot to data theft, steal source code and intellectual property, or in the most aggressive cases extort their employers by threatening to leak company data if not paid.
A separate but related operation, known as "Contagious Interview," flips the target: instead of getting hired, these actors pose as recruiters to trick real job candidates into downloading malware during a fake technical interview process. Together, the two schemes weaponize the hiring process from both sides.
Sanctions risk: Knowingly or unknowingly paying a North Korean national violates U.S. and international sanctions. The company, not just the employee, bears legal exposure. Two DPRK nationals were federally indicted in January 2025 for this activity.
How It Works: The Attack Chain
The operation is not improvised; it follows a disciplined playbook that Microsoft researchers have documented in detail. HR professionals touch multiple points of this chain and have real visibility at each stage if they know what to look for.
Critically, researchers found that DPRK actors are now using AI tools to systematically analyze job postings, extracting required skills, certifications, and role-specific language, and then auto-generate tailored résumés and cover letters for each application. This is not one person applying to jobs. It is an automated pipeline processing hundreds of applications at scale.
Key Tactics Used by the Threat Actors
AI-Generated Identity Documents
DPRK workers use tools like Faceswap to transplant their photos into stolen employment documents and government IDs. The same photo appears across multiple résumés with different backgrounds, a reliable forensic signal. AI also polishes résumés to eliminate grammatical errors that previously revealed foreign authorship.
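Because the same face recurs across otherwise distinct applications, recruiting teams can screen candidate photos for near-duplicates. A minimal sketch, assuming perceptual hashes of profile photos (e.g. 64-bit average hashes from a tool such as the `imagehash` library) have already been computed and stored as hex strings; the candidate IDs, hash values, and distance threshold below are illustrative, not from the report:

```python
# Flag candidate photos that are near-duplicates across applications by
# comparing precomputed perceptual hashes (hex strings) pairwise.

def hamming(hash_a: str, hash_b: str) -> int:
    """Number of differing bits between two equal-length hex hash strings."""
    return bin(int(hash_a, 16) ^ int(hash_b, 16)).count("1")

def find_reused_photos(photos: dict[str, str], max_distance: int = 5):
    """Return pairs of candidate IDs whose photo hashes are nearly identical.

    photos: mapping of candidate ID -> hex perceptual hash of profile photo.
    A small Hamming distance suggests the same photo with minor edits,
    such as a swapped background.
    """
    ids = sorted(photos)
    matches = []
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            if hamming(photos[a], photos[b]) <= max_distance:
                matches.append((a, b))
    return matches

# Example: two "different" applicants sharing one photo, one unrelated.
hashes = {
    "cand-001": "ffd8a0c4b2e19073",
    "cand-002": "ffd8a0c4b2e19072",  # differs by 1 bit: near-duplicate
    "cand-003": "0123456789abcdef",
}
print(find_reused_photos(hashes))  # [('cand-001', 'cand-002')]
```

The pairwise comparison is quadratic in the number of candidates, which is fine at recruiting-pipeline scale; a real deployment would also fold in reverse image search results.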
Voice-Changing Software
Workers use real-time voice modification tools during phone screens and video interviews to mask accents and pass audio scrutiny. Interviews may be conducted entirely with altered voices. They also use facilitators who may conduct the interview on behalf of the actual worker.
Laptop Farms & Facilitators
A local accomplice (inside the target country) receives company hardware and sets up a "laptop farm": a physical location with multiple employer-issued computers. The DPRK worker remotely accesses these machines via Remote Monitoring and Management (RMM) tools, appearing to log in locally. Facilitators also manage payroll and money laundering.
Reused Personas Across Companies
Rather than creating a unique identity for every infiltration, workers reuse the same names and profiles across multiple companies simultaneously. One exposed repository contained résumés, email accounts, VPN credentials, and tracking sheets for multiple concurrent engagements, operated like a business.
Fake Freelance & Staffing Pipelines
Workers apply through staffing agencies and freelance platforms (Upwork, Freelancer) to add layers of distance from the employer. Staffing company employees can be witting accomplices. This diffuses accountability and makes background checks harder to execute.
"Contagious Interview" Targeting Job Seekers
In this companion operation, fake recruiters approach real developers on LinkedIn and GitHub. Candidates receive a "coding challenge" that when executed installs BeaverTail or OtterCookie malware. The malware steals credentials, cryptocurrency wallets, and opens backdoor access to the victim's employer network. Compromised code repositories then spread the malware further through the software supply chain.
As documented in Microsoft's March 2026 report on AI as tradecraft, generative AI now serves as a force multiplier across the entire DPRK operation: writing tailored résumés, scripting interview responses, translating content, summarizing stolen data, and debugging malware. AI has removed the technical and linguistic barriers that previously made these operations detectable. The same résumé quality that once signaled a senior candidate can now also cover a state-sponsored fraud operation.
How to Detect It: Red Flags for HR
The good news: most of these signals are visible during standard HR workflows: screening, interviews, onboarding, and ongoing employment. Microsoft's April 2026 detection guidance specifically calls out HR SaaS platforms (like Workday) as detection surfaces, noting that anomalous patterns in candidate behavior can be observed at the application and post-hire stages.
| Stage | Red Flag | Severity |
|---|---|---|
| Application | Candidate's social media and professional profiles are very new, thin, or show no organic history. Profile photos look AI-generated or appear in reverse image searches linked to multiple identities. | High |
| Application | Contact information (phone, address) doesn't match the stated location or is a VOIP number with no verifiable history. Email domain is generic or newly created. | High |
| Application | Résumé is unusually polished no gaps, no stylistic inconsistencies, overly optimized for keyword matching. Multiple submitted résumés that are suspiciously similar in format. | Medium |
| Screening | Candidate is reluctant to use video. When video is used, the image appears filtered, lighting is unnatural, or there is a lag inconsistent with a low-latency domestic connection. | High |
| Interview | Audio quality is inconsistent with what you'd expect, or voice has an electronic quality. Candidate deflects when asked about their local environment (office setup, time zone, local news). | High |
| Interview | Someone other than the stated applicant appears to be assisting during technical portions. Responses are unusually fast on complex questions but slow on simple personal ones. | High |
| Onboarding | New hire requests that company hardware be shipped to an address different from their stated residence. Asks to use personal devices or remote desktop tools instead of company-managed equipment. | High |
| Onboarding | Payroll account is set up to route to cryptocurrency or through an intermediary transfer service. Requests for advance payment or unusual pay structure. | High |
| Employment | Employee works hours inconsistent with their stated time zone (active late at night in their local time). Logs in from IP addresses tied to VPNs or foreign countries. | Medium |
| Employment | Staffing company employee background check is difficult to verify independently. References are unverifiable or share the same email domain. | Medium |
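The table's first "Employment" red flag, activity hours inconsistent with a stated time zone, is straightforward to measure from authentication logs. A minimal sketch using only the standard library; the waking-hours window and the example data are illustrative assumptions, not thresholds from the guidance:

```python
# Flag an employee whose logins cluster outside waking hours in their
# stated time zone, using UTC login timestamps from authentication logs.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def off_hours_fraction(logins_utc, stated_tz: str,
                       day_start: int = 6, day_end: int = 23) -> float:
    """Fraction of UTC login timestamps falling outside day_start-day_end
    local time in the employee's stated time zone."""
    tz = ZoneInfo(stated_tz)
    off = sum(1 for t in logins_utc
              if not day_start <= t.astimezone(tz).hour < day_end)
    return off / len(logins_utc)

# Example: an employee who claims Eastern Time but is actually working a
# 09:00-17:00 day in UTC+9 (i.e. 00:00-08:00 UTC) -- overnight in New York.
logins = [datetime(2026, 4, 1, h, 30, tzinfo=timezone.utc) for h in range(8)]
score = off_hours_fraction(logins, "America/New_York")
print(f"{score:.0%} of logins fall outside local waking hours")
```

A single off-hours login means nothing; a sustained majority over weeks, combined with VPN exit-node IPs, is the pattern worth escalating.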
Microsoft's April 2026 detection research notes that DPRK actors have been observed making automated queries to Workday Recruiting API endpoints from known actor infrastructure, essentially scanning for open roles programmatically before a human ever applies. If your security team monitors your HR platform's access logs, unusual API activity from VPN exit nodes on career-facing endpoints is a detectable pre-recruitment signal.
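That log-monitoring idea can be sketched as a simple counter over access records. The record shape, endpoint paths, VPN list, and threshold below are all illustrative assumptions; real HR-platform logs and threat-intelligence feeds will differ:

```python
# Surface possible programmatic scanning of career-facing endpoints by
# counting requests per source IP and checking against a VPN exit-node list.
from collections import Counter

KNOWN_VPN_EXITS = {"203.0.113.7", "198.51.100.9"}   # from a threat-intel feed
CAREER_PATHS = ("/ccx/api/recruiting", "/careers")   # endpoints to watch

def flag_scanners(records, threshold: int = 50):
    """Return source IPs on the VPN list that hit career endpoints more than
    `threshold` times: a possible automated pre-recruitment scan."""
    hits = Counter(r["src_ip"] for r in records
                   if r["path"].startswith(CAREER_PATHS)
                   and r["src_ip"] in KNOWN_VPN_EXITS)
    return {ip: n for ip, n in hits.items() if n > threshold}

# Example: one VPN exit node polling a recruiting endpoint 120 times
# alongside a handful of ordinary applicant requests.
log = ([{"src_ip": "203.0.113.7", "path": "/ccx/api/recruiting/v4/jobs"}] * 120
       + [{"src_ip": "192.0.2.10", "path": "/careers/apply"}] * 3)
print(flag_scanners(log))  # {'203.0.113.7': 120}
```

In practice this belongs in the security team's SIEM rather than an HR tool, but HR owns the request to turn that logging on.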
HR Mitigation Recommendations
The following recommendations are drawn directly from Microsoft's published guidance, adapted for HR workflows. They are ordered by impact; the first three alone would catch the majority of known infiltrations.
1. Require live, unfiltered video for all remote candidate interviews. Mandate that candidates show their face on an unmodified camera in a verifiable environment. Ask them to tilt the camera and show their surroundings briefly. Use video as the primary ID verification step, not just audio. If a candidate refuses or offers only audio, treat it as a significant flag.
2. Verify identity documents against a secondary, independently-sourced contact. Do not rely on documents alone. Cross-check that the stated identity has a verifiable, organic digital history: social media activity over time, professional history with contactable references, a consistent physical address. Use reverse image search on any profile photo. If the identity is "too clean," investigate further.
3. Control where company hardware is shipped and who receives it. Require that laptops and equipment ship to the address on file for the candidate, and confirm delivery. Be suspicious of any request to forward hardware to a third party or a different address. New hire access should not be provisioned until the device is confirmed in the hands of the named employee.
4. Apply heightened scrutiny to staffing agency and contractor hires. DPRK workers frequently enter via staffing companies, sometimes with the staffing company's knowledge. Request direct verification of the candidate's identity, not just the agency's assurance. Understand that the staffing company may not have done adequate verification themselves, or may be a witting participant.
5. Check for consistency: résumé, LinkedIn, GitHub, and references should align. DPRK actors often reuse the same persona name across multiple applications. Search the candidate's name, email, and claimed work history across platforms. If their GitHub account was created last month but their résumé claims 10 years of experience, that's a red flag requiring explanation.
6. Flag unusual payroll routing at onboarding. Work with Finance and Payroll to flag cryptocurrency payment requests, transfers through money-transfer services, or accounts that route through foreign intermediaries. DPRK workers need to move money out of the country; the payroll setup is where that begins.
7. Establish a clear escalation path to Security for any suspicions during hiring. HR teams should not be expected to make final determinations on potential nation-state fraud. Define a process: if an HR recruiter encounters three or more red flags from the table above, they escalate immediately to the security or insider threat team for investigation, without tipping off the candidate.
8. Train recruiting staff annually on this specific threat. This is not a generic phishing awareness topic. Recruiters need scenario-based training that walks through what a DPRK application actually looks like: the polished résumé, the LinkedIn with no history, the camera-off interview request. Microsoft has published detailed case examples that can anchor this training.
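Recommendation 7's "three or more red flags" rule is easy to encode so that escalation is consistent rather than left to individual judgment. A minimal sketch; the flag catalog names below are illustrative shorthand for the table entries, not identifiers from any real system:

```python
# Encode the escalation rule: three or more observed red flags from the
# catalog triggers an escalation to the security / insider threat team.

SEVERITY = {
    "thin_online_history": "High",
    "mismatched_contact_info": "High",
    "over_polished_resume": "Medium",
    "video_refused": "High",
    "altered_voice": "High",
    "hardware_reship_request": "High",
    "crypto_payroll_request": "High",
    "off_hours_activity": "Medium",
}

def should_escalate(observed_flags: list[str]) -> bool:
    """Escalate when three or more catalogued red flags are present."""
    known = [f for f in observed_flags if f in SEVERITY]
    return len(known) >= 3

print(should_escalate(["video_refused", "thin_online_history"]))   # False
print(should_escalate(["video_refused", "thin_online_history",
                       "hardware_reship_request"]))                # True
```

A real workflow would also weight severities (for instance, auto-escalating any single hardware-reshipment request), but even a plain count removes ambiguity about when recruiters must hand off to Security.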
If you suspect an active hire is a DPRK worker: Do not confront the employee or tip them off. Escalate immediately to your Security and Legal teams. Evidence preservation and coordinated offboarding are required to avoid tipping off the broader network and to support any law enforcement referral. Past cases show these workers will exfiltrate data or threaten disclosure if they believe exposure is imminent.
Source Articles
This brief was prepared from the following primary sources. Readers are encouraged to review the originals for technical detail and indicators of compromise.
- Dark Reading, "DPRK Fake Job Scams Self-Propagate in 'Contagious Interview'" (April 2026)
- Microsoft Security Blog, "Jasper Sleet: North Korean Remote IT Workers' Evolving Tactics to Infiltrate Organizations" (June 2025)
- Microsoft Security Blog, "AI as Tradecraft: How Threat Actors Operationalize AI" (March 2026)
- Microsoft Security Blog, "Detection Strategies Across Cloud and Identities Against Infiltrating IT Workers" (April 2026)