The pressure to rapidly adopt AI to defend against AI-powered threats - while avoiding the pitfalls of moving too fast - is the paradox that’s keeping security leaders up at night. Over on VentureBeat, our own Michael Fanning and Tanya Faddoul share why rigorous testing, maintaining human oversight, and being realistic about what AI can and cannot do are essential to successful AI security implementation. Dive in here. ⬇️
How to adopt AI for security without falling into the trap of rushing it
More Relevant Posts
-
I had the chance to partner with our CISO, Michael Fanning to explore a paradox that’s keeping security leaders up at night - the race to adopt AI fast enough to defend against AI powered threats, while staying grounded enough to avoid the risks of moving too quickly. Through several conversations with CISOs and industry leaders, we unpack why speed alone isn’t enough when it comes to AI and security; and what it really means to build digital resilience in this next chapter. Big thanks to Mike, Sancha N. and JK Lialias for bringing such sharp insight and partnership on this. 🔗 Read the full article on VentureBeat and let me know what you think: https://lnkd.in/gdiEyFen
The pressure to rapidly adopt AI to defend against AI-powered threats - while avoiding the pitfalls of moving too fast - is the paradox that’s keeping security leaders up at night. Over on VentureBeat, our own Michael Fanning and Tanya Faddoul share why rigorous testing, maintaining human oversight, and being realistic about what AI can and cannot do are essential to successful AI security implementation. Dive in here. ⬇️
To view or add a comment, sign in
-
As organisations rush to integrate generative AI, many are overlooking a new and dangerous blind spots that it has created. At the Infosecurity Magazine Enterprise Risk Virtual Summit on 12 November, join our panel of experts for an inside look at the real risks behind AI adoption. What you’ll learn: • The critical differences between AI Security and AI Safety, and why both matter • Real-world attacker tactics, from prompt injection to sensitive data exposure • Actionable guidance for building a proactive AI security program 🎟️ Register free: https://lnkd.in/eKFAQSwB 🎙️Gisela Hinojosa, veteran pentester and Research Lead at Cobalt 📅 12 November 2025
To view or add a comment, sign in
-
-
As organizations adopt AI technologies, safety must become a focal point. Explore how you can ensure that AI tools are safe for use. Read more: https://wix.to/ofBOwPJ #AI #Safety #Kafico #AIProcurement #AIethics
To view or add a comment, sign in
-
Last week I got to hear different AI Security leaders talk about the current state of the landscape and risks impacting the enterprise. Peter McKay of Snyk put it simply: We speed because we trust our brakes. What are we trusting when it comes to AI initiatives if security isn't a major consideration into the Enterprise's AI Strategy? 🏎️ 🏎️ 🏎️ #AIStrategy #Security #InstallBrakes #DontPumpTheBrakes
To view or add a comment, sign in
-
Over the past few weeks, my team and I have been deeply focused on AI security and looking for practical solutions. I recently watched the Stanford Online webinar “Building Trustworthy AI: Navigating Security in the Agentic Era,” and it was incredibly insightful. Highly recommend it to anyone working with AI or responsible for securing AI systems. Huge thanks to @Stanford Stanford Online and speakers Neil Daswani and Vinay Rao for the depth of expertise shared. Key takeaways: 🔏 Trustworthy AI = Robustness, fairness, privacy, transparency, and accountability. 🛂 AI security requires cross-functional collaboration across engineering, security, and risk teams. 🩹 New threats like prompt injection and autonomous agents demand proactive defenses. ⚖️ Balance speed with safety—innovation is only valuable if it’s secure. Here’s the link: 👉 https://lnkd.in/gRZapskr #AI #CyberSecurity #TrustworthyAI #AISafety #Stanford #StanfordOnline #PromptInjection #AIAlignment #AgenticAI #EnterpriseSecurity
Stanford Webinar - Building Trustworthy AI: Navigating Security in the Agentic Era
https://www.youtube.com/
To view or add a comment, sign in
-
➡️ 𝗖𝗼𝗻𝘁𝗶𝗻𝘂𝗶𝗻𝗴 𝗼𝗻 𝗼𝘂𝗿 𝗰𝗼𝗻𝘃𝗲𝗿𝘀𝗮𝘁𝗶𝗼𝗻 𝗳𝗿𝗼𝗺 𝘁𝗵𝗶𝘀 𝗽𝗮𝘀𝘁 𝘄𝗲𝗲𝗸 𝗼𝗻 𝗔𝗜 𝗮𝗻𝗱 𝗰𝘆𝗯𝗲𝗿𝘀𝗲𝗰𝘂𝗿𝗶𝘁𝘆 – two fields that are increasingly intertwined. In this Wall Street Journal interview, Zscaler 𝗖𝗘𝗢 Jay Chaudhry with Raakhee Mirchandani discusses how the rise of 𝗔𝗜 𝗮𝗴𝗲𝗻𝘁𝘀 is redefining digital risk. It’s a helpful listen for anyone exploring 𝗔𝗜 𝗳𝘂𝗻𝗱𝗮𝗺𝗲𝗻𝘁𝗮𝗹𝘀. As 𝗔𝗜 becomes more autonomous, 𝗰𝘆𝗯𝗲𝗿𝘀𝗲𝗰𝘂𝗿𝗶𝘁𝘆 isn’t just a safeguard – it’s a 𝗰𝗼𝗿𝗲 𝗽𝗿𝗶𝗻𝗰𝗶𝗽𝗹𝗲 𝗼𝗳 𝗔𝗜 𝗹𝗶𝘁𝗲𝗿𝗮𝗰𝘆. 𝗞𝗲𝘆 𝗾𝘂𝗲𝘀𝘁𝗶𝗼𝗻𝘀 𝘁𝗼 𝗰𝗼𝗻𝘀𝗶𝗱𝗲𝗿: • How do we protect the 𝗱𝗮𝘁𝗮 𝗮𝗻𝗱 𝗺𝗼𝗱𝗲𝗹𝘀 that power intelligent systems? • How do we detect when 𝗔𝗜 𝗯𝗲𝗵𝗮𝘃𝗶𝗼𝗿𝘀 𝘀𝗵𝗶𝗳𝘁, whether through anomalies or adversarial prompts? • How can frameworks like 𝗭𝗲𝗿𝗼-𝗧𝗿𝘂𝘀𝘁 ensure 𝗔𝗜 𝗲𝘃𝗼𝗹𝘃𝗲𝘀 𝘀𝗮𝗳𝗲𝗹𝘆 𝗮𝗻𝗱 𝗿𝗲𝘀𝗽𝗼𝗻𝘀𝗶𝗯𝗹𝘆? At Women Leading AI, we view these as foundational to building 𝘁𝗿𝘂𝘀𝘁𝘄𝗼𝗿𝘁𝗵𝘆, 𝗿𝗲𝘀𝗶𝗹𝗶𝗲𝗻𝘁, 𝗮𝗻𝗱 𝗵𝘂𝗺𝗮𝗻-𝗮𝗹𝗶𝗴𝗻𝗲𝗱 𝘀𝘆𝘀𝘁𝗲𝗺𝘀. 🎧 Take a listen below. #AIFundamentals #Cybersecurity #AI #ZeroTrust #WomenLeadingAI #AIandGovernance https://lnkd.in/eHH2haZT
Exploring Agentic AI: Innovation Meets Security
wsj.com
To view or add a comment, sign in
-
From a cybersecurity perspective, Agentic AI is fundamentally broken, particularly if they run on data that has yet to be vetted, use tools that has not been robustly audited, and make decisions in environments where changes are constantly changing. The authors state that building integrity is going to be crucial in every chain, but here lies the problem, how can users ensure integrity on public LLMs that they have no control of? Leaders are starting to realise that without trust, the use of AI is going to be a problem ... and without building integrity into AI workflows, there will never be trust. https://lnkd.in/g-xwbYd6
To view or add a comment, sign in
-
The Missing Piece in AI Safety? 🧩 The Commercial Ecosystem. There's a growing vibrant AI Safety landscape: • Academics > Research • Think Tanks > Policy • Testers > Model Audits • Educators > Public Awareness • Insiders > Frontier Lab Internal Safety (often under-resourced) But what's lagging is a robust, well-funded commercial AI safety ecosystem—think of the prototype as the cybersecurity industry. We're just starting to see players like Irregular AI emerge. Some say this is "just more cyber", I disagree. While the need for a defensive posture is similar, the risks are unique. Caveat: A commercial ecosystem brings much-needed resources, but it also carries the risk of misalignment: profit-seeking can divert companies from their core safety mission. To my mind, this commercial layer is the lagging piece in our overall defensive strategy. We need to bridge this gap to secure the future of AI. #AISafety #AIEthiks #FutureofAI #CommercialSafety
To view or add a comment, sign in
-
Explore content categories
- Career
- Productivity
- Finance
- Soft Skills & Emotional Intelligence
- Project Management
- Education
- Technology
- Leadership
- Ecommerce
- User Experience
- Recruitment & HR
- Customer Experience
- Real Estate
- Marketing
- Sales
- Retail & Merchandising
- Science
- Supply Chain Management
- Future Of Work
- Consulting
- Writing
- Economics
- Artificial Intelligence
- Employee Experience
- Workplace Trends
- Fundraising
- Networking
- Corporate Social Responsibility
- Negotiation
- Communication
- Engineering
- Hospitality & Tourism
- Business Strategy
- Change Management
- Organizational Culture
- Design
- Innovation
- Event Planning
- Training & Development
Such an important discussion. At Elevate Côte d’Ivoire, we advocate for ethical innovation-using AI not just to advance security, but to build trust and opportunity.