SaaS sprawl, shadow IT, & shadow AI — the silent security risk in your company

Key points:

  • Shadow IT exists in almost every organization — often unnoticed
  • Shadow IT & AI often lead to wasted money and sensitive data leaks
  • Educating employees and using digital security tools is the best defense

Using your own software at work to get the job done seems efficient. A quick file-sharing app here, a personal ChatGPT account there. No big deal, right?

Not really. Behind the scenes, this type of behavior creates blind spots that pose security risks for your company. According to a report by Cequence, 68% of organizations have exposed shadow APIs (unmanaged application programming interfaces) due to using non-approved SaaS applications.

In this CyberFocus, we’ll break down how and why SaaS sprawl, shadow IT, and shadow AI can quietly open the door to data leaks and compliance violations, and how to protect your organization from these dangers.

SaaS sprawl, shadow IT, and shadow AI: What’s the difference?

SaaS sprawl

SaaS sprawl happens when SaaS (software-as-a-service) tools multiply across an organization without centralized supervision. It often starts innocently: someone needs a better way to track tasks, share files, or manage customers. Over time, however, the company ends up with redundant subscriptions and untracked data.

Shadow IT

Shadow IT refers to any hardware, software, or cloud services used without the IT department’s knowledge, including SaaS sprawl. According to IBM, 41% of employees have acquired, modified, or created technology without their IT or IS team’s knowledge. That creates major blind spots: the IT department can’t secure what it doesn’t know exists.

Chart: online services used by employees without IT approval, as of 2020

Shadow AI

The newest frontier, shadow AI describes the use of AI tools — like ChatGPT, Midjourney, or coding copilots — without IT oversight. According to Software AG research, half of office workers use shadow AI. While these tools can boost productivity, they also pose major risks around data privacy, intellectual property, and regulatory compliance.

“Many generative AI tools require users to input raw data — text, images, code, customer information — to generate useful outputs. But without proper vetting, there's no visibility into where the data is stored, how it’s processed, and who has access to it,” said Karolis Zdanevičius, Senior Cyber Security Analyst at NordVPN.

How big is the risk of shadow IT & AI?

Even though employees are using shadow IT & AI for efficiency, the cons can greatly outweigh the pros. Here’s why.

Security vulnerabilities

If the IT department doesn’t supervise adopted tools, key protections like strong authentication, data encryption, or proper access controls can be overlooked. This creates gaps that attackers can exploit. According to IBM’s report, 45% of data breaches in 2022 occurred in the cloud, making unregulated online services a major security risk.

“Many SaaS tools operate over HTTPS, making their traffic appear harmless or indistinguishable from approved web services. This allows them to bypass firewalls that don’t block unknown domains, web proxies that don’t inspect encrypted traffic, and even VPN requirements if accessed from personal laptops or smartphones,” warns K. Zdanevičius.
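Zdanevičius’s point can be illustrated with a minimal sketch of the countermeasure: instead of trying to spot “bad” traffic, an egress filter checks every outbound hostname against an explicit allowlist of sanctioned SaaS domains. The domain names below are hypothetical examples, not a recommended list.

```python
# Minimal sketch of an egress allowlist check by hostname.
# APPROVED_DOMAINS is a hypothetical example list, not a real policy.
from urllib.parse import urlparse

APPROVED_DOMAINS = {"sharepoint.com", "salesforce.com", "slack.com"}

def is_approved(url: str) -> bool:
    """Return True if the URL's host is an approved domain or a subdomain of one."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in APPROVED_DOMAINS)

print(is_approved("https://acme.slack.com/files"))     # True
print(is_approved("https://random-filedrop.example"))  # False
```

A real secure web gateway does far more (TLS inspection, category feeds, user identity), but the core decision is the same: unknown destinations are denied by default rather than assumed harmless.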

Data leaks

When employees use tools like personal cloud storage (Google Drive, Dropbox, etc.) without IT approval, the company loses control over who can access those files. Threat actors can get to them, as in Okta’s case, where an employee signed in to a personal account on a company device and granted access permissions to an unapproved online service. As a result, an attacker gained entry to Okta’s customer support system and launched cyberattacks against its customers.

With AI tools, employees may accidentally type in confidential data like code, client details, or financials. That data can then be stored on the AI provider’s servers and exposed to third parties in future AI outputs — that’s exactly what happened in the Samsung incident, which led the company to ban ChatGPT use altogether.
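One mitigation for this failure mode is redacting obvious secrets before any text leaves the company. Below is a minimal, illustrative sketch using regular expressions; the patterns are assumptions for demonstration and are nowhere near exhaustive enough for production data loss prevention.

```python
# Minimal sketch of pre-prompt redaction: mask obvious sensitive strings
# before text is sent to an external AI service. Patterns are illustrative.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace each matched pattern with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jane.doe@example.com, key sk-abcdef1234567890XYZ"))
# → Contact [EMAIL], key [API_KEY]
```

Real deployments typically put this logic in a gateway in front of the AI provider, so individual employees don’t have to remember to run it.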

Legal risk

Unapproved tools can violate regulations like GDPR, HIPAA, or SOC 2. Shadow AI is especially risky here — AI platforms may process data in ways that violate privacy laws or export intellectual property.

“Shadow AI tools often operate outside formal vendor management processes, meaning they aren't reviewed for compliance,” noted K. Zdanevičius. “This opens up the company to non-compliance fines, breach notification liabilities, and regulatory scrutiny — even if the tool was used by a single employee.”

Financial waste

SaaS sprawl leads to redundant tools, unused licenses, and costs that aren’t captured in the IT budget. It’s common for companies to discover they're paying for multiple apps that do the same thing or aren’t being used anymore. Gartner studies have found that 30% to 40% of IT spending in enterprises is shadow IT — an astounding scope of wasted funds.

Poor maintenance and traceability

When employees adopt tools without approval, there is no onboarding or offboarding. Access credentials and data can end up unmanaged, with no one responsible for maintaining the processes, especially once the employee leaves the company.

Also, there is often no audit trail for the processes, making it impossible to investigate issues or prove due diligence.

6 actions to clear shadow threats in your company

If you want to regulate what’s going on in your company, the best tactic is educating employees and — most importantly — giving them clear, easy solutions for what they’re trying to achieve.

According to K. Zdanevičius, it’s important to educate first. “Most employees don’t mean harm — they just want to work smarter. Ongoing training about risks, especially around AI tools and unvetted software, goes a long way.”

Here are key strategies to help prevent these risks:

  1. Make IT involvement easy. The major reason people go around IT is that the official process feels too slow or restrictive. Involve the IT team not as an obstacle, but as a partner.
  2. Use a SaaS management platform. Management tools help IT teams track software across the company — even the tools no one reported. You’ll get visibility into subscriptions, usage patterns, and redundant apps.
  3. Set policies for software use. Don’t just block tools — people will use them anyway. Instead, create guidelines for safe use: define what types of data can and can’t be entered, and which platforms are trusted.
  4. Implement security platforms. Cybersecurity tools like NordLayer can provide protection against unauthorized access and data breaches. Features like Zero Trust Network Access (ZTNA), Secure Web Gateway (SWG), and Firewall-as-a-Service (FWaaS) ensure that only authorized users can see sensitive company resources.
  5. Regularly update security measures. The technological landscape is ever-evolving. Routinely assessing policies and tools can help avoid any new threats that emerge.
  6. Manage AI usage. If your company is experimenting with multiple AI models (like OpenAI, Claude, or Gemini), AI orchestration platforms like nexos.ai make them easier and safer to manage. nexos.ai connects over 200 AI models through one API, giving you centralized access, governance, and observability.
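Step 2, discovering tools no one reported, usually starts with the logs you already have. Here is a minimal sketch that counts web-proxy hits to domains outside a sanctioned list; the log format, sanctioned domains, and the crude root-domain heuristic are all assumptions for illustration.

```python
# Minimal sketch of shadow-SaaS discovery from a web-proxy log:
# count requests to root domains outside the sanctioned list.
# Log format ("user domain" per line) and domain list are assumptions.
from collections import Counter

SANCTIONED = {"office.com", "salesforce.com"}

def unknown_saas(log_lines):
    """Return (root_domain, hit_count) pairs for unsanctioned domains, most hits first."""
    hits = Counter()
    for line in log_lines:
        user, domain = line.split()
        root = ".".join(domain.rsplit(".", 2)[-2:])  # crude eTLD+1 guess
        if root not in SANCTIONED:
            hits[root] += 1
    return hits.most_common()

log = ["alice drive.google.com", "bob www.office.com",
       "alice chat.openai.com", "carol drive.google.com"]
print(unknown_saas(log))  # [('google.com', 2), ('openai.com', 1)]
```

Commercial SaaS management platforms do the same thing at scale, enriching the hit counts with app catalogs, spend data, and per-user usage so IT can decide what to sanction, consolidate, or block.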

The importance of bringing shadow tech into the light

The rise of SaaS sprawl, shadow IT, and shadow AI comes from people’s need to move fast and solve problems. But without proper oversight, you might have much more serious problems to deal with — like compromised security, violated compliance, and costly data breaches.

With the right mix of visibility, education, and cybersecurity tools, it’s possible to maintain efficiency without putting your data or reputation on the line.

Want to take control of your online security, privacy, and data? Make sure to subscribe to our bi-weekly newsletter for more.

