Generative AI Survey Shows High Adoption, Low Security Awareness
Michael Clark
October 17, 2023
There’s been a lot of speculation about the future of generative AI tools like ChatGPT and Google Bard, as is natural when new technologies emerge. But at ExtraHop, we wanted data. So early this fall, we surveyed more than 1,200 security and IT leaders from around the world to see how they were handling generative AI. The results surprised us.
Whether your organization has banned generative AI tools or is simply curious about them, read on for highlights, then check out the full report for additional insights, along with eight recommendations for implementing generative AI tools safely in your organization.
Security Isn’t the Top Priority for Leaders
When asked what concerned them about generative AI tools, respondents’ top answer was receiving inaccurate or nonsensical responses (40%), not security. Security-centric concerns, such as exposure of employee or customer personally identifiable information (PII) (36%), exposure of trade secrets (33%), and financial loss (25%), all ranked lower.
Basic Security Hygiene Is Lacking
The vast majority of respondents (82%) were somewhat or very confident that their current security stack could protect them against threats from generative AI tools. Yet fewer than half of organizations had invested in technology to monitor employee use of generative AI, leaving them blind to potential data loss, and only 46% had security policies governing what company data is and is not safe to share with these tools.
This disconnect, confidence in protection despite a lack of monitoring technology and security policies, suggests to us that many respondents may be relying on next-generation firewalls (NGFWs) to block traffic to generative AI domains. Relying on NGFWs for this is problematic for two reasons: NGFWs don’t fit neatly into security investigation workflows without significant effort, and end users can always find a workaround for blocklists.
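To see why blocklists are so easy to sidestep, consider a minimal sketch of what a domain blocklist boils down to. This is illustrative Python, not any vendor’s actual NGFW rule syntax, and the domain list here is an assumption for the example:

```python
from urllib.parse import urlparse

# Hypothetical blocklist of known generative AI domains (illustrative only).
BLOCKED_DOMAINS = {"chat.openai.com", "bard.google.com"}

def is_blocked(url: str) -> bool:
    """Return True if the URL's host matches a blocklisted domain."""
    host = urlparse(url).hostname or ""
    return host in BLOCKED_DOMAINS or any(
        host.endswith("." + domain) for domain in BLOCKED_DOMAINS
    )

print(is_blocked("https://chat.openai.com/"))            # True: known domain, blocked
print(is_blocked("https://unlisted-ai-proxy.example/"))  # False: workaround slips through
```

Anything not on the list, whether a new AI service, a third-party proxy, or a personal device off the corporate network, passes through silently. That is exactly the workaround problem that makes blocking alone an unreliable control.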
Generative AI Bans Don’t Work
Nearly a third of respondents said their organizations have banned the use of generative AI tools, roughly the same proportion that said they were very confident in their ability to protect against AI threats (36%). But if those bans were effective, we’d expect a similar share of organizations to report that employees never use AI tools. That wasn’t the case: only 5% said employees never use them. Clearly, employees are finding ways around bans, and once data is submitted to these tools, there’s no getting it back.
Organizations Want More Guidance – Especially From the Government
Almost three-quarters of respondents have invested or plan to invest in generative AI protections or security measures this year, but leaders want external guidance to ensure those investments pay off. When asked how involved the government should be in regulating AI, 60% said it should set clear regulations that businesses must follow.
Generative AI Is Here to Stay
With 73% of respondents saying employees use generative AI tools sometimes or frequently, it’s clear this is no passing fad. At ExtraHop, we believe these tools can deliver incredible gains in productivity, provided organizations manage the associated risks. We recommend establishing policies for safe use, investing in tools that give you visibility into employee use of AI, and training employees on those policies. For more tips, and to see how your organization compares to others in your region or industry, grab a copy of the report here.