NewsGuard’s investigation shows OpenAI’s Sora 2 can generate convincing, news-style deepfake videos for false claims—often in minutes. Watermarks are removable; filters are inconsistent. The disinformation risk is real and operational. Leaders should plan content authentication (C2PA), rapid takedown workflows, and staff training before the next crisis. Treat generative video as both capability and liability. Build defenses now, not after viral damage. #AIForCEO #Deepfakes #TrustAndSafety For more articles like this, register for our weekly newsletter: https://lnkd.in/ejYfVBEQ
OpenAI's Sora 2 can create convincing deepfakes in minutes, posing a significant disinformation risk.
More Relevant Posts
- So it’s a month now since DeepDive came out of stealth, and the market response has been fantastic. What we’re hearing from #AML compliance teams is that the manual #EnhancedDueDiligence approach of searching, reading, documenting, and analysing findings is no longer sustainable and doesn’t leave enough time to uncover all the risk. Here’s our blog on why compliance teams are choosing DeepDive over manual #EDD: https://lnkd.in/eyKAqT2u
- OpenAI’s Sora 2 can be prompted to generate false-claim videos 80% of the time, according to NewsGuard. https://lnkd.in/et_xRtpq #Sora2 #OpenAI #Deepfakes #Disinformation #NewsGuard
- Sora 2 from OpenAI has launched, and Fredrikson attorney Steve Helland offers advice on guardrails that should be in place before you start creating. https://bit.ly/3VHeVS7
- PSA: #agentic browsers are extra vulnerable to bad actors. I don’t think any are ready for public use, and beyond the hype, functionality is limited. The problem with rushed #AI browsers like OpenAI’s new #Atlas is their reliance on #prompt-#injection-prone #LLMs, which carry plenty of #security #risk. This definitely won’t be the last time it happens. To be fair, all other LLMs are vulnerable to adversarial content too. The deeper problem is that there is no good way to stop these attacks yet, and web browsers hold a lot of personal information, so the stakes are much higher.
- Is your WISP future-proof? Cybercriminals are using AI to power their threats, but you can also use AI to build a smarter defense plan into your IRS WISP. Read how to stay compliant in a changing world in our new blog post: http://bit.ly/47gnLg2 #aiandaccounting #FutureProof #Accounting #TaxProfessional #irswisp
- The Bad Side of AI-Generated Videos: What to Know Before Using OpenAI’s Sora or Meta’s Vibes. In her latest Hippodrome guest editorial, Jurgita Lapienytė, Editor-in-Chief at Cybernews, issues a critical warning about the dark side of AI-generated video tools like OpenAI’s Sora and Meta’s Vibes. While these tools are mesmerizing, they are rapidly accelerating the spread of deepfakes, making it nearly impossible for people to distinguish fact from fiction. Threat actors are already exploiting this technology for highly personalized business email compromise (BEC) scams, identity theft, and fake job offers. Lapienytė details the signs of a fake video and shares essential steps people can take to protect themselves from manipulation, misinformation, and financial fraud in the age of synthetic media. #Fintech #CreditUnions #Cybersecurity #Deepfakes #AIVideo #InfoSec #FraudPrevention https://finop.us/dz5lps
- 🔒 Agentic #AI is here—and it’s rewriting security. Traditional models built for predictable, human-driven actions no longer hold, while autonomous agents improvise, adapt, and can go off-script fast. The new SHIELD framework introduces guardrails—sandboxing, human-in-the-loop, runtime enforcement, and agent identity—to help security teams shift from static defenses to dynamic, agent-centric protection. Read more from F5’s Lori MacVittie 👉 http://ms.spr.ly/6047soGAt
- Spot the bot. Stop the bot. Outsmart the bot. Websites are really upping their anti-scraping game, with fingerprinting, honeypots, CAPTCHAs, WAFs, you name it. For scrapers, that means the battlefield is no longer just “can I get the data?” but “can I stay undetected while I do it?” We put together a guide that breaks down: ⚡ the top anti-scraping techniques websites use today ⚡ how to recognise the signals before you get flagged ⚡ smart scraper strategies to dodge bans without brute force ⚡ why the best scrapers avoid detection instead of fighting every defense. It’s practical, responsible, and it might just save your scraper from an early grave. Read the full breakdown here: https://bit.ly/3IfnosH
- OpenAI’s Sora 2 is a willing hoax generator. These clips were created with OpenAI’s new Sora 2 text-to-video model, but they advance two pervasive false claims: that Coca-Cola threatened to withdraw as a sponsor of the Super Bowl (Coke was never a sponsor), and that Moldova’s September 2025 parliamentary elections were riddled with election fraud, a claim amplified by Russian influence operations. While the videos contain some visible flaws, it’s easy to see how AI-generated videos will fuel more widespread false narratives. In fact, Sora readily produced videos to accompany 80% of the 20 false claims tested in my latest NewsGuard report. My colleague Ines Chomnalez and I generated these videos in minutes. Although OpenAI has implemented some guardrails, such as a floating Sora watermark, we found that it can be removed in minutes using free online tools. Read the full report here: https://lnkd.in/eTsmUURY