By Leanna Seah | March 16, 2026
Scammers are now using AI to completely fabricate convincing identities, altering their appearance and building polished profiles to apply for roles. Gartner projects that by 2028, one in four job applicants could be fake.
AI‑generated resumes, professional headshots, personal websites, and even realistic LinkedIn profiles all help impostors present as the perfect candidate.
Once inside, these bad actors can cause serious damage, stealing sensitive data, planting malware, or gaining unauthorised access to internal systems under the guise of a legitimate employee.
This guide shows you what’s happening and how to spot job applicant scams to safeguard your business.
Why businesses are prime targets for scammers
With money, data, and system access all up for grabs, scammers are exploiting any vulnerability they can find.
Money and financial exploits
Once an impostor gains access, the payoff can be immediate: payment systems, expense workflows, and financial tools let them divert funds, plant ransomware, or exfiltrate sensitive commercial data. Some also manipulate vendor accounts or invoice processes for quick monetary gain.
Identity and data theft
By posing as legitimate candidates, they can collect employee directories, org charts, credential formats, and onboarding documents. These details fuel wider identity fraud or allow them to build more realistic synthetic identities for future attacks.
Insider access to company systems
Once onboarded, an impostor can log into internal tools, source code repositories, ticketing systems, and shared drives. This insider position gives attackers a trusted foothold to escalate privileges, deploy malware, or create backdoors.
Overemployment and salary arbitrage
Some scammers use multiple fabricated identities to hold several remote roles simultaneously or subcontract the work at a lower cost, pocketing the salary difference.
Remote‑work vulnerabilities
Digital‑only hiring makes it easier for fake candidates to slip through. Fraudsters use deepfakes, scripted interview answers, AI‑generated voices, and proxy interviewers to impersonate skilled professionals.
How to spot fake job applicants
Look for identity inconsistencies
These are the easiest to miss because scammers rely on volume and automation.
What to watch for:
- Mismatched names and email handles: e.g., “Maria Johnson” applying with robertk.dev@gmail.com. This pattern appears frequently in AI‑generated spam applications.
- Repeating contact info across multiple applicants: if your ATS shows the same phone number or address for different candidates, that’s a hallmark of automated fake profiles.
- Addresses in commercial zones: scammers often use non‑residential mailing addresses, rented mailboxes, or co‑working spaces to appear "globally mobile."
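The first two checks can be automated against an applicant export. Here is a minimal sketch; the `name`/`email`/`phone` field names and the name-matching heuristic are illustrative assumptions, not the schema of any particular ATS:

```python
import re
from collections import defaultdict

def flag_identity_inconsistencies(applicants):
    """Flag applicants whose email handle shares nothing with their name,
    and contact details that repeat across different applicants.
    `applicants` is a list of dicts with assumed keys: name, email, phone."""
    flags = defaultdict(list)
    phone_owners = defaultdict(set)

    for a in applicants:
        handle = a["email"].split("@")[0].lower()
        tokens = [t for t in re.split(r"\W+", a["name"].lower()) if t]
        # Heuristic: some name token, or first-initial + surname, should
        # appear in the email handle ("maria" or "mjohnson" in "mjohnson92").
        if not any(t in handle or t[0] + tokens[-1] in handle for t in tokens):
            flags[a["email"]].append("name/email mismatch")
        phone_owners[a["phone"]].add(a["name"])

    # A phone number shared by several "different" people is a strong signal.
    for phone, owners in phone_owners.items():
        if len(owners) > 1:
            for a in applicants:
                if a["phone"] == phone:
                    flags[a["email"]].append(
                        f"phone shared with {len(owners) - 1} other applicant(s)"
                    )
    return dict(flags)
```

Heuristics like these will produce false positives (married names, nicknames), so treat the output as a review queue, not an auto-reject list.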
Scan for AI‑generated résumé patterns
Fake resumes aren’t sloppy anymore. They’re too perfect, which is its own kind of giveaway. Red flags to look out for include:
- Identical visual templates: because nothing says authentic professional like a resume that looks copy‑pasted from 47 others. If every layout, icon, and section order is identical, it’s likely generated or duplicated.
- Overly polished achievements: immaculate bullet points boasting flawless metrics and zero evidence they ever happened. “Improved process efficiency by 40%.” How? With what? For whom? Don’t worry, they won’t say.
- Hyper-generic skills lists: if the resume reads like someone asked ChatGPT to “make me sound employable,” trust your instincts. No human has that balanced a skill mix outside of a Marvel movie.
- Experience that’s too broad to be real: scammers love the “why choose one career when you can have seven?” approach. Expect gems like “8+ years in five different tech stacks” and “enterprise-level achievements” with no employer listed.
Check employment and education history
Keep a lookout for:
- Overlapping roles that don’t make sense: for example, two full‑time jobs in two countries at the same time.
- Employment at companies that no longer exist: scammers often list defunct companies because they assume no one can verify them. This pattern is well‑documented in candidate fraud attempts.
- Unverifiable degrees or certificates: fake applicants frequently list education from institutions that aren’t accredited, credentials that don’t match the format of genuine university qualifications, or vague descriptions (e.g., “Bachelor of Engineering, London University”).
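The overlapping-roles check is mechanical once dates are parsed out of the resume. A rough sketch, assuming employment dates have already been extracted into `datetime.date` values:

```python
from datetime import date

def find_overlapping_roles(roles):
    """Return pairs of full-time roles whose date ranges overlap.
    `roles` is a list of (employer, start, end) tuples of datetime.date;
    an end of None means "present"."""
    overlaps = []
    for i in range(len(roles)):
        for j in range(i + 1, len(roles)):
            _, s1, e1 = roles[i]
            _, s2, e2 = roles[j]
            e1 = e1 or date.max  # open-ended role runs to "forever"
            e2 = e2 or date.max
            # Two ranges overlap when each starts before the other ends.
            if s1 < e2 and s2 < e1:
                overlaps.append((roles[i][0], roles[j][0]))
    return overlaps
```

Short, legitimate overlaps (notice periods, moonlighting) do happen, so flagged pairs are a prompt for a follow-up question, not proof of fraud.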
Audit their LinkedIn profile (or notice when there isn’t one)
Before you trust a resume, take a quick look at their LinkedIn profile, assuming they’ve bothered to create one.
- Broken links or empty profiles: fake candidates often list LinkedIn URLs that lead nowhere or to accounts with minimal activity.
- Profile photos that appear “AI‑smooth”: perfect lighting, perfect symmetry, and no imperfections are classic AI headshot tells.
- Employment history that doesn’t match the resume: if roles, dates, or industries differ even slightly, the profile was likely generated separately from the resume.
Spot suspicious file and link behaviours
If you’ve ever opened a resume and wondered, “Is this a genuine candidate, or am I about to be cyberattacked?”, here’s what to watch for.
- Manually typed URLs: impostors often ask you to type a URL rather than click a file, e.g. “Please enter my portfolio link in your browser.” These links frequently point to spoofed personal domains designed to look like legitimate resume pages but act as malware loaders.
- Password-protected PDFs or ZIP files: real candidates rarely send encrypted resumes. Scammers use password protection to conceal malicious scripts from security scanners.
- Resumes hosted on suspicious domains: e.g., “resume‑viewer.xyz” instead of a normal PDF attachment. These fake pages often auto‑download malware or prompt you to bypass browser warnings.
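Two of these checks can be scripted before anyone opens an attachment. A sketch using only the Python standard library; the allowlist of trusted domains is a placeholder assumption you would replace with your own:

```python
import zipfile
from urllib.parse import urlparse

# Assumed allowlist -- substitute the hosts your team actually trusts.
TRUSTED_DOMAINS = {"linkedin.com", "github.com"}

def is_encrypted_zip(path_or_file):
    """True if any entry in the ZIP archive is password-protected
    (bit 0 of the per-entry general-purpose flag field)."""
    with zipfile.ZipFile(path_or_file) as zf:
        return any(info.flag_bits & 0x1 for info in zf.infolist())

def is_suspicious_resume_url(url):
    """Flag resume links hosted outside the allowlist of known domains."""
    host = (urlparse(url).hostname or "").lower()
    # Accept exact matches and subdomains of trusted hosts.
    return not any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)
```

Note this only detects that an archive is encrypted; actually scanning the contents still belongs to your email security gateway.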
How to spot fake candidates during interviews
Watch for visual or audio inconsistencies
Lip‑sync delays, unnatural eye movement, facial distortion during motion, or repeated “connectivity issues” can indicate deepfake tools or identity masking. Persistent avoidance of video is another clear warning sign.
Look for inconsistencies across interview stages
If a candidate’s voice, technical depth, or communication style shifts noticeably between calls, it may suggest the use of proxies or multiple individuals participating in the process.
Use a simple real‑time verification check
Asking the candidate to briefly cover part of their face with a hand or slightly turn their head can disrupt basic deepfake filters and reveal visual anomalies.
Probe for concrete details in their story
AI‑generated applications often stay vague. Ask for specific achievements, named stakeholders, metrics, tools used, and constraints faced. Then, verify those details independently where possible.
11 tips to protect your company from AI job applicant scams
1. Verify identity early
For sensitive roles, request government ID during scheduling. In video interviews, run a quick liveness check: ask the candidate to read a few random words or tilt their head.
2. Independently verify references
Don’t rely on the contacts provided. Confirm employment, education, and certifications through trusted channels or accredited databases.
3. Use live, unscripted work tests
Short, timed exercises reveal real problem‑solving ability and make it harder for candidates to outsource responses. For technical roles, look for consistency in coding style or approach.
4. Rotate interviewers
Spread conversations across different team members and over multiple days. Impostors struggle to keep their story, tone, and expertise aligned across people and time.
5. Limit access to data and systems until checks clear
Hold off on granting access to sensitive data and company systems. In the first 30 to 60 days, monitor for unusual logins, device behaviour, or data movement.
6. Use AI-based tools to detect scammers
Agentic AI tools like Pindrop and Incode can detect deepfake audio/video in real time, flag AI‑generated resumes or cover letters, validate ID documents, and verify candidate identity through biometrics and digital forensics.
7. Embed human-only cues into your job postings
List a deliberately unusual responsibility or requirement in your job posting (e.g., “making sandwiches” for a programming role). Real candidates ignore it, while AI-crafted resumes tend to mirror it back verbatim, helping you flag automated submissions.
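Screening submissions for the planted phrase is a one-liner. A sketch, with the honeypot phrase list as an assumption you would tailor per posting:

```python
# Assumed honeypot: the odd requirement planted in this posting.
HONEYPOT_PHRASES = ["making sandwiches"]

def mirrors_honeypot(resume_text, phrases=HONEYPOT_PHRASES):
    """True if the resume echoes a planted, nonsensical job requirement,
    which a human applicant would normally ignore."""
    text = resume_text.lower()
    return any(p.lower() in text for p in phrases)
```

Rotate the phrase between postings so scrapers can't learn to strip it.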
8. Own your brand touchpoints
Add a clear “Recruitment fraud” notice on your careers site. Outline your legitimate hiring process, official email domains, and provide a reporting channel for suspicious outreach.
9. Repeat key warnings across channels
Pin fraud alerts on corporate social profiles and include a brief notice in job descriptions so it remains visible even when posts are scraped by third‑party sites.
10. Monitor for impersonation
Watch for lookalike domains, fake job ads, and spoofed recruiter profiles. Move quickly on takedown requests; impostor listings can cause real reputational damage.
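A crude first pass at lookalike-domain monitoring can use string similarity from the standard library. The official domain and the 0.8 threshold below are illustrative assumptions; dedicated typosquatting tools go much further:

```python
from difflib import SequenceMatcher

OFFICIAL_DOMAIN = "airswift.com"  # assumed: your real careers domain

def looks_like_impersonation(candidate_domain,
                             official=OFFICIAL_DOMAIN,
                             threshold=0.8):
    """Flag domains that are near, but not exact, matches of the official one.
    A high similarity ratio on a non-identical string suggests typosquatting."""
    candidate = candidate_domain.lower()
    if candidate == official:
        return False  # the genuine domain is never an impersonation
    return SequenceMatcher(None, candidate, official).ratio() >= threshold
```

Run it over newly registered domains from a feed or over sender domains in recruiter-impersonation reports.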
11. Educate your team
Train recruiters to spot red flags: deepfake artefacts, strange URLs, and “resume download” traps (especially when asked to manually type a domain or open ZIP files).
Hire safely and securely with a trusted recruitment partner
Remember that AI isn’t the enemy. But it has made recruitment fraud faster, cheaper, and harder to spot. The fix is a blend of clear candidate communications, early identity and skills verification, and tighter access controls.
Start simple. Be consistent. And make it easy for real candidates (and impossible for fake ones) to move through your process.
With the right safeguards in place and the support of a trusted talent acquisition partner like Airswift, you can protect your organisation and ensure every new hire is exactly who they say they are.
Contact us to learn how we can help you hire with ease and efficiency.