Threat actors will continue to abuse deepfake technology to conduct fraudulent activity, so organizations need to implement strong security protocols – even if doing so adds user friction.
November 6, 2025

Generative artificial intelligence (GenAI) tools can produce entertaining content, aid in research, and boost user productivity. But the rapid emergence of new tools such as Sora 2, OpenAI's latest audio and video generation model, combined with a lack of regulation, is contributing to a rise in disconcerting deepfake risks.

OpenAI, the company behind ChatGPT, launched Sora 2 in September as an update built with "more advanced world simulation capabilities" that lets users create eerily realistic videos from text prompts and images. Initially available only to users with an invitation code, the technology is now open to a wide population of users without an invitation.

While there are plenty of beneficial GenAI use cases in terms of promoting creativity, speed, and scale, new tools like Sora 2 pose imminent risks for enterprises as well. Attackers can abuse Sora 2 to enhance social engineering tactics and manipulate even some of the more adept users with convincing deepfakes. OpenAI already had to tighten the guardrails against deepfakes in Sora 2 after the actors' union SAG-AFTRA lodged a complaint.

These tools are advancing faster than any regulations can catch up, warns Ben Colman, CEO and co-founder of AI-detection company Reality Defender.

"The use cases tend to evolve the quickest by bad actors, who are honestly the best users of technology and the best product managers," Colman tells Dark Reading. "They will do things a million times, but it only has to be right once or twice to cause mayhem."

In the absence of any regulations, or any reliable way to confirm whether a video is legitimate, the challenges and risks are limited only by imagination, Colman adds. Threat actors could conduct identity fraud, financial fraud, or potentially larger attacks that affect the masses, he warns. Each time GenAI tools like Sora receive an update, generated videos become increasingly indistinguishable from reality. Right now, even PhD experts can't spot the difference between reality and a deepfake with the naked eye, Colman says.

Sora 2 security risks will affect an array of industries, primarily the legal and healthcare sectors. AI-generated evidence continues to pose challenges for lawyers and judges because it's difficult to distinguish between reality and illusion. Deepfakes could also affect healthcare, where many services are delivered virtually, including appointments and consultations.

"You don't know if you're talking to the right medical practitioner," warns Ashwin Sugavanam, VP of AI and identity analytics at identity verification platform Jumio. "There's no way to authenticate if the person on the other side of the video is a practitioner."

Sora 2 updates enable users to create longer videos, but improved voice authenticity is the more concerning upgrade. Colman observes that more realistic pregnant pauses and emotional cues could trick users, and that the update could also make it easier for threat actors to engage in real time during a conversation. "[It's] better in terms of mechanics, how the face and body move, but also perceived emotional intelligence," Colman says.

Sora 2 launched with the ability to craft more believable videos than ever before, but GenAI technology will only continue to progress. Google's comparable offering, Veo, is even better than OpenAI's, Sugavanam says.

OpenAI did implement safety mechanisms, including a watermark, to make it clear that a video was generated with AI. However, the guardrail is insufficient because threat actors could work around it, says Sugavanam. If they have mastered Sora 2's capabilities, they can master removing watermarks, he adds. That leaves little evidence to tell users a video is fake.

"Authenticity of the video is something we need to stay ahead of," Sugavanam urges. "In my research, it requires a multi-prong approach."

Alongside video approval, organizations should implement additional authentication factors, including likeness checks and device location. These checks can capture the device, verify its location, and ensure that it's associated with where the person should be. It's important that the checks are random in nature so threat actors cannot replicate them, Sugavanam advises.
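To make that advice concrete, the sketch below shows, in Python, what randomized step-up verification might look like. It is a minimal illustration, not a vendor implementation: the check functions and session fields are hypothetical placeholders standing in for calls to an identity verification or device intelligence service. The point is only that the subset and order of checks are chosen unpredictably at request time.

```python
import secrets

# Placeholder checks: in production, each would call an identity
# verification or device intelligence service. All names here are
# illustrative, not a real vendor API.
def check_likeness(session):
    # e.g., compare a live selfie against the enrolled reference image
    return session.get("likeness_ok", False)

def check_device_location(session):
    # e.g., confirm the reported location matches the user's usual region
    return session.get("location_ok", False)

def check_device_binding(session):
    # e.g., confirm the request comes from a device bound to the account
    return session.get("device_ok", False)

CHECKS = [check_likeness, check_device_location, check_device_binding]

def randomized_step_up(session, rounds=2):
    """Run a random subset of checks in a random order, so an attacker
    cannot pre-script responses to a fixed verification sequence."""
    pool = list(CHECKS)
    passed = True
    for _ in range(min(rounds, len(pool))):
        check = secrets.choice(pool)  # cryptographically strong selection
        pool.remove(check)
        passed = passed and check(session)
    return passed

# Example: this session would pass likeness and device-binding checks
# but fail if the location check happens to be drawn.
print(randomized_step_up({"likeness_ok": True, "device_ok": True}))
```

Using `secrets` rather than `random` here is deliberate: the selection should stay unpredictable even to someone who has observed many prior verification sessions.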
Sugavanam also recommends examining the virtual background. If threat actors aren't careful with the prompts they use to generate their videos, they can end up producing the same background repeatedly. The same background appearing across multiple identities is a clear indication that one person is posing as several individuals.
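One way to operationalize that background check is to compare perceptual hashes of session frames across supposedly distinct identities. The Python sketch below assumes a background frame has already been extracted from each session's video; Pillow and ImageHash are real third-party libraries, but the frame-extraction step and the match threshold are assumptions to tune against your own data.

```python
from itertools import combinations

from PIL import Image  # pip install Pillow
import imagehash       # pip install ImageHash

# Assumed threshold: pHash Hamming distances of roughly 0-8 usually
# indicate near-identical images; calibrate on known-good sessions.
MATCH_THRESHOLD = 8

def background_hash(frame_path):
    """Perceptual hash of a pre-extracted frame showing the
    participant's virtual background."""
    return imagehash.phash(Image.open(frame_path))

def find_shared_backgrounds(frames_by_identity):
    """Flag pairs of supposedly distinct identities whose session
    backgrounds are near-duplicates."""
    hashes = {ident: background_hash(path)
              for ident, path in frames_by_identity.items()}
    flagged = []
    for (id_a, h_a), (id_b, h_b) in combinations(hashes.items(), 2):
        if h_a - h_b <= MATCH_THRESHOLD:  # Hamming distance between hashes
            flagged.append((id_a, id_b))
    return flagged

# Hypothetical usage with frames captured from two onboarding sessions:
# find_shared_backgrounds({"applicant_a": "a.png", "applicant_b": "b.png"})
```

Perceptual hashes are robust to re-encoding and small crops, which is why near-duplicate backgrounds across unrelated identities remain a useful fraud signal even when the underlying files differ byte for byte.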
Multifactor authentication (MFA) for identity verification is valuable in light of the deepfake phenomenon, but the strategy lags at many organizations. Verifying users and connecting them to their devices is important, but those users want less friction, and that is a tough balance, Sugavanam says.

"The impetus to do it is always the problem," Sugavanam says. "It's a lot of effort required when you want to put MFA in place."

Security risks aren't unique to Sora 2. Colman emphasizes that this is a global challenge because the same concerns affect every GenAI platform. It speaks to a broader industrywide trust and safety problem in the wake of the AI boom, says Scott Steinhardt, head of communications at Reality Defender. He highlights its impact on the job market, where it's becoming harder for employers to trust that the candidates they interview over Zoom are legitimate and not deepfakes.

AI will improve the world, but in edge cases with GenAI the dangers and risks are exponential, Colman stresses. He and Steinhardt are hopeful that initial regulations will roll out in the next 12 to 18 months to curb the security risks.

"This is this year's sort of big concern," Steinhardt says, "because deepfakes are an 'everyone' problem and they require various solutions."

Arielle Waldman, Features Writer, Dark Reading

Arielle spent the last decade working as a reporter, transitioning from human-interest stories to covering all things cybersecurity in 2020. Now, as a features writer for Dark Reading, she delves into the security problems enterprises face daily, hoping to provide context and actionable steps. She previously lived in Florida, where she wrote for the Tampa Bay Times, before returning to Boston, where her cybersecurity career took off at SearchSecurity. When she's not writing about cybersecurity, she pursues personal projects that include a mystery novel and a poetry collection.