Security programs trust AI data files, but they shouldn't: such files can conceal malware more stealthily than most file types.
October 30, 2025

A researcher has demonstrated that Windows' native artificial intelligence (AI) stack can serve as a vector for malware delivery.

In a year when clever and complex prompt injection techniques have been growing on trees, security researcher hxr1 identified a much more traditional way of weaponizing rampant AI. In a proof-of-concept (PoC) shared exclusively with Dark Reading, he described a living-off-the-land (LotL) attack that uses trusted files from the Open Neural Network Exchange (ONNX) to bypass security engines.

"All those different living off the land binaries [we're familiar with] have been there now for so many years," hxr1 says. "They're old and all well known, and most of the [endpoint detection and response systems, or EDRs] and antivirus [filters] are good enough to capture the kinds of attacks using them. So attackers always look for new living-off-the-land binaries so that they can bypass these existing defenses and get their payloads on the targeted system. That's where this ONNX model comes into the picture."

Cybersecurity programs are only as effective as their developers design them to be. They might catch undue volumes of data exfiltrating from a network, or a foreign .exe file that starts running, because these are known indicators of suspicious behavior. They likely won't notice, though, if malware arrives on a system in a form they've never seen before.

That's what makes AI such a headache. As new systems, software, and workflows tack on AI capabilities, they open up new, unseen vectors through which cyberattacks can be transmitted.

For example, since 2018, the Windows operating system has been steadily adding functionality that allows applications to perform AI inference locally, without having to connect to a cloud service. Windows Hello, Photos, and Office applications all use inbuilt AI to perform facial recognition, object detection, and productivity functions, respectively. They do so by calling the Windows Machine Learning (ML) application programming interface (API), which loads ML models in the form of ONNX files.

Windows and security programs inherently trust ONNX files. Why wouldn't they? Malware comes in EXEs, PDFs, and other formats, but to date no threat actors in the wild have demonstrated that they intend to, or can, weaponize neural networks for malicious ends. It's certainly possible, though, by any number of means.

An easy method for poisoning a neural network would be to plant a malicious payload in its metadata. The tradeoff is that the payload would sit in plaintext, where a security program is far more likely to notice it incidentally.

It would be more difficult but more subtle to embed malware piecemeal among the named components of the model: its nodes, inputs, and outputs. Or an attacker could use advanced steganography to conceal a payload within the very weights that comprise the neural network.
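To make those hiding spots concrete, here is a minimal sketch of the first and third techniques, written against the open source `onnx` Python package and NumPy. It illustrates how permissive the format is; it is not hxr1's actual PoC. The file names are hypothetical, the "payload" is a harmless placeholder string, and the weight trick assumes the model's first initializer is a float32 tensor with at least 8 bits' worth of elements per payload byte.

```python
import base64

import numpy as np
import onnx
from onnx import numpy_helper

PAYLOAD = b"PLACEHOLDER-NOT-REAL-MALWARE"  # harmless stand-in

model = onnx.load("model.onnx")  # hypothetical input model

# Technique 1: metadata. metadata_props is a free-form key/value store that
# inference runtimes ignore and that nothing signs or validates. Simple, but
# the bytes sit in (near-)plaintext inside the file.
prop = model.metadata_props.add()
prop.key = "training_notes"  # innocuous-looking key
prop.value = base64.b64encode(PAYLOAD).decode("ascii")

# Technique 3: weight steganography. Overwrite the least-significant mantissa
# bit of float32 weights with payload bits; the relative change is ~1e-7, so
# inference output is effectively unchanged and nothing "looks" wrong.
tensor = model.graph.initializer[0]
weights = numpy_helper.to_array(tensor).copy()          # assumes float32
bits = np.unpackbits(np.frombuffer(PAYLOAD, dtype=np.uint8))
flat = weights.view(np.uint32).ravel()                  # reinterpret bit patterns
flat[: bits.size] = (flat[: bits.size] & np.uint32(0xFFFFFFFE)) | bits
tensor.CopyFrom(numpy_helper.from_array(weights, tensor.name))

onnx.save(model, "model_tainted.onnx")

# Extraction is just the reverse: read the metadata value back, or collect
# the low bit of each of the first 8 * len(PAYLOAD) weights and re-pack them.
reloaded = onnx.load("model_tainted.onnx")
meta = {p.key: p.value for p in reloaded.metadata_props}
assert base64.b64decode(meta["training_notes"]) == PAYLOAD
lsb = numpy_helper.to_array(reloaded.graph.initializer[0]).view(np.uint32).ravel()
assert np.packbits((lsb[: bits.size] & 1).astype(np.uint8)).tobytes() == PAYLOAD
```

The same load-extract-reassemble pattern would apply to fragments stashed in node, input, and output names; only the field being read changes.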
All three methods work, as long as a loader sits nearby that can call the relevant Windows APIs to unpack the payload, reconstruct it in memory, and run it. And the latter two methods are extremely stealthy: trying to spot a fragmented payload in a neural network would be like trying to reconstruct a needle from bits of it spread through a haystack.

Let's say an attacker manages to sneak malware into an ONNX file. They then have a variety of options for how to deliver it to a victim. A phishing email would do, carrying an ONNX file and loader. Or an attacker could take advantage of the widespread trust users place in AI software by publishing a malicious model on an open source platform like Hugging Face.

But there's a crucial difference between a PDF and an ONNX file in a phishing email, or a software download from Hugging Face versus GitHub.

"When you download a GitHub repo, that'll always be, like, a Python script, or .NET code, or something like that. And EDR engines are good enough to scan those types of files," hxr1 notes.

By contrast, when a security program sees a process loading an ONNX file, it will read it as benign AI inference. Doubly so because of how difficult it would be to find a payload in such a complex binary file.

Triply so because an ONNX file is supposed to contain only data, so "these models don't have to be signed binaries. You can download any models, you can use native libraries to extract them, and there are no validations or signature checks happening there," hxr1 points out. They'll skirt right by analysis tools focused on executable behavior.

Quadruply so because of how the file gets loaded and executed, hxr1 says. "You can hide a payload in any file format. Like, you can put it in an audio file. But how are you going to extract it? What API are you going to use? Are EDRs good enough to monitor your suspicious APIs as they retrieve, read the file, and extract data from the file?" That's why his PoC worked so well: the dynamic link libraries (DLLs) that operate on ONNX files are signed by Microsoft and built into Windows. So when a malicious ONNX file is loaded on a target's system, all any security program will see is trusted Windows DLLs reading model data to perform an AI task.

From hxr1's perspective, there isn't any flaw in how Windows AI works. Rather, the cybersecurity community at large needs to adjust: security tools need to be reworked to look for threats couched in AI files.

"EDRs should monitor who loads them, what has been extracted, where the extracted data is being passed, and those paths need to be monitored," he suggests. "On top of that we have static analyzers, like YARA rules, that we can use to monitor for suspicious strings in data. Also, we can use application controls like AppLocker. All those things we could do as part of a mitigation and detection strategy."
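As a rough sketch of the static-analysis piece of that advice, the scanner below walks the string-bearing fields of an ONNX file and flags values that are unusually long or high-entropy, the kind of screening a YARA-style rule would automate. The thresholds and field list are illustrative assumptions, not hxr1's tooling.

```python
import math

import onnx

def shannon_entropy(s: str) -> float:
    """Bits per character; base64 or compressed blobs score noticeably high."""
    if not s:
        return 0.0
    n = len(s)
    return -sum(s.count(c) / n * math.log2(s.count(c) / n) for c in set(s))

def scan(path: str, max_len: int = 256, max_entropy: float = 4.5) -> list[str]:
    """Flag string fields in an ONNX model that look like smuggled data."""
    model = onnx.load(path)
    fields = [(f"metadata[{p.key}]", p.value) for p in model.metadata_props]
    fields += [(f"doc_string[{node.name}]", node.doc_string)
               for node in model.graph.node]
    fields += [("node_name", node.name) for node in model.graph.node]
    findings = []
    for where, value in fields:
        if len(value) > max_len or shannon_entropy(value) > max_entropy:
            findings.append(f"{where}: {len(value)} chars, "
                            f"entropy {shannon_entropy(value):.2f} bits/char")
    return findings

if __name__ == "__main__":
    # Hypothetical file name, matching the earlier embedding sketch
    for hit in scan("model_tainted.onnx"):
        print("suspicious:", hit)
```

A scan like this would catch metadata and name smuggling, but not weight-level steganography, which is invisible at the string level; that gap is why hxr1 also stresses runtime monitoring of who loads models and where the extracted data flows.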
If nothing else, he says, "the main goal here is to prove that these models are not trustworthy. Don't blindly trust any model sitting on the Internet."

Nate Nelson, Contributing Writer

Nate Nelson is a writer based in New York City. He formerly worked as a reporter at Threatpost, and wrote "Malicious Life," an award-winning Top 20 tech podcast on Apple and Spotify. Outside of Dark Reading, he also co-hosts "The Industrial Security Podcast."