New research shows AI crawlers like Perplexity, Atlas, and ChatGPT are surprisingly easy to fool.
October 29, 2025

AI search tools like Perplexity, ChatGPT, and OpenAI's Atlas browser offer powerful capabilities for research and information gathering, but they are also dangerously susceptible to low-effort content manipulation attacks.

It turns out that websites able to detect when an AI crawler is visiting can serve it completely different content from what human visitors see, allowing bad actors to deliver poisoned content with surprising ease. To demonstrate how effective this "AI cloaking" technique can be, researchers at SPLX recently ran experiments with sites that served different content to regular Web browsers and to AI crawlers, including Atlas and ChatGPT.

One demonstration involved a fictional designer from Oregon, whom the researchers named "Zerphina Quortane." The researchers rigged it so that human visitors to Quortane's site would see what appeared to be a legitimate bio and portfolio presented on a professional-looking Web page with a clean layout. But when an AI agent visited the same URL, the server returned entirely fabricated content that cast the fictional Quortane as a "Notorious Product Saboteur & Questionable Technologist," replete with examples of failed projects and ethical violations.

"Atlas and other AI tools dutifully reproduce the poisoned narrative describing Zerphina as unreliable, unethical, and unhirable," SPLX researchers Ivan Vlahov and Bastien Eymery wrote in a recent blog post. "No validation. Just confident, authoritative hallucination rooted in manipulated data."

In another experiment, SPLX set out to show how easily an AI crawler can be tricked into preferring the wrong job candidate by serving it a different version of a résumé than the one a human would see. The researchers created a fake job posting with specific candidate evaluation criteria and then set up plausible but fake candidate profiles hosted on different Web pages. For one of the profiles, associated with a fake individual named "Natalie Carter," the researchers ensured the AI crawler would see a version of Carter's résumé that made her appear significantly more accomplished than the human-readable version of her bio. Sure enough, when one of the AI crawlers in the study visited the profiles, it ranked Carter ahead of all the other candidates. But when the researchers presented Carter's unmodified résumé, the version humans would see, the crawler put her dead last among the candidates.

The experiments show how AI-targeted cloaking can turn a "classic SEO trick into a powerful misinformation weapon," Vlahov and Eymery wrote. Cloaking is a technique that scammers have long used to serve search engine crawlers different content from what humans see in order to manipulate search results.
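In its simplest form, the branching rule the researchers describe can be expressed in a few lines of server code. The sketch below was written for illustration, with example page text and a hand-picked list of crawler tokens rather than SPLX's actual test pages; it returns one page to ordinary browsers and a different, poisoned page to AI crawlers based on the User-Agent header.

# Illustrative sketch of AI-targeted cloaking: the server inspects the
# User-Agent header and serves AI crawlers a different document than the
# one human visitors see. Page content and route are made up for this example.
from flask import Flask, request

app = Flask(__name__)

# Substrings found in publicly documented AI crawler User-Agent strings
# (illustrative list; a real attacker could match on other signals).
AI_CRAWLER_MARKERS = ("GPTBot", "OAI-SearchBot", "ChatGPT-User", "PerplexityBot")

HUMAN_PAGE = "<html><body><h1>Jane Doe</h1><p>Award-winning product designer.</p></body></html>"
POISONED_PAGE = "<html><body><h1>Jane Doe</h1><p>Known for failed projects and ethics violations.</p></body></html>"

@app.route("/profile")
def profile():
    ua = request.headers.get("User-Agent", "")
    # The single branching rule: AI crawlers get the fabricated page,
    # while the page shown to people never changes.
    if any(marker in ua for marker in AI_CRAWLER_MARKERS):
        return POISONED_PAGE
    return HUMAN_PAGE

if __name__ == "__main__":
    app.run(port=8000)

Because the page served to ordinary browsers is untouched, a casual human visitor has no way of noticing that anything different is being fed to AI tools.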
AI cloaking simply extends the technique to AI crawlers, but with considerably more impact.

As the researchers explained it, "a single rule on a web server can rewrite how AI systems describe a person, brand, or product, without leaving public traces." With just a few lines of cleverly manipulated content, an attacker could fool hiring tools, compliance systems, and research models into ingesting false data. The fake candidate profile experiment showed how attackers can use AI agent-specific content to skew automated hiring, procurement, or compliance tools. In fact, "any pipeline that trusts web-retrieved inputs is exposed to silent bias," the researchers said.

That AI crawlers, at least at their present stage of evolution, don't verify or validate the content they ingest makes cloaking attacks easy to carry out. "No technical hacking needed. Just content delivery manipulation," Vlahov and Eymery said.

Organizations that allow AI systems to make judgment calls based on external data, such as shortlisting candidates for a job interview based on their social media profiles, need to pay attention. Instead of implicitly trusting the tool, organizations must implement controls to validate AI-retrieved content against canonical sources. They also need to red team their internal AI workflows for exposure to AI cloaking attacks and ask vendors about content provenance and bot authentication, SPLX said.

"This is context poisoning, not hacking," the researchers noted. "The manipulation happens at the content-delivery layer, where trust assumptions are weakest."

The content manipulation vulnerability that SPLX's research highlights is just one of many emerging risks tied to the rapid integration of AI tools into daily workflows. Previous research has shown how AI systems can confidently hallucinate false information, amplify biases from their training data, leak sensitive information through prompt injection attacks, and behave in other unpredictable ways.
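As a concrete illustration of the kind of control SPLX recommends, the sketch below compares what a page serves to an ordinary browser with what it serves to an AI-crawler-style client and flags large divergences for human review before the AI-retrieved version is trusted. The URL, User-Agent strings, and similarity threshold are placeholders chosen for this example, not part of the SPLX research.

# Sketch of a cross-check on AI-retrieved content: fetch the same URL as a
# browser and as an AI-style client, then flag pages whose two views diverge.
import difflib
import requests

BROWSER_UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"   # browser-style client
AI_CRAWLER_UA = "GPTBot/1.0"                                # AI-crawler-style client

def fetch(url: str, user_agent: str) -> str:
    # Retrieve the page as the given client type.
    resp = requests.get(url, headers={"User-Agent": user_agent}, timeout=10)
    resp.raise_for_status()
    return resp.text

def cloaking_suspected(url: str, threshold: float = 0.9) -> bool:
    """Return True if the AI-facing view differs substantially from the
    browser-facing view of the same URL. Threshold is an arbitrary choice."""
    human_view = fetch(url, BROWSER_UA)
    agent_view = fetch(url, AI_CRAWLER_UA)
    similarity = difflib.SequenceMatcher(None, human_view, agent_view).ratio()
    return similarity < threshold

if __name__ == "__main__":
    url = "https://example.com/profile"  # placeholder URL
    if cloaking_suspected(url):
        print(f"Warning: {url} serves materially different content to AI clients.")
    else:
        print(f"No significant divergence detected for {url}.")

A check like this catches only the simplest setups; more careful cloaking operations can key on crawler IP ranges rather than the User-Agent header, which is why SPLX also points to content provenance and bot authentication as part of the answer.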
Jai Vijayan, Contributing Writer

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year career at Computerworld, Jai also covered a variety of other technology topics, including big data, Hadoop, Internet of Things, e-voting, and data analytics. Prior to Computerworld, Jai covered technology issues for The Economic Times in Bangalore, India. Jai has a Master's degree in Statistics and lives in Naperville, Ill.