Everyone uses AI at work. Here’s how companies can keep data safe

Companies across industries are encouraging their employees to use AI tools at work, and those employees are often eager to get the most out of generative AI chatbots like ChatGPT. So far, everyone’s on the same page, right?
There’s just one hurdle: How do companies protect sensitive corporate data from being exposed to the same tools that are supposed to boost productivity and ROI? After all, it’s tempting to upload financial information, client data, proprietary code, or internal documents to your favorite chatbot or AI coding tool to get the quick results you want (or that your boss or colleague might demand). In fact, a new study by data security firm Varonis finds that shadow AI – unlicensed generative AI applications – poses a significant threat to data security, because these tools can bypass corporate governance and IT monitoring, leading to potential data leaks. The study found that almost all companies have employees using unauthorized applications, and nearly half have employees using AI applications considered high-risk.
For information security leaders, one of the main challenges is educating employees about the risks and what the company requires. They must ensure that employees understand the types of data the organization deals with – from company data such as internal documents, strategic plans and financial records, to customer data such as names, email addresses, payment details and usage patterns. It is also important to clarify how each type of data is classified, for example, whether it is public, internal only, confidential, or highly restricted. Once this foundation is established, clear policies and access limits must be put in place to protect that data accordingly.
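To make that foundation concrete, here is a minimal, purely illustrative sketch in Python of how classification levels and per-tool sharing limits might be expressed. The classification names, tool categories, and function are assumptions for the example, not any particular vendor’s tooling or policy.

```python
from enum import IntEnum

class Classification(IntEnum):
    """Ordered sensitivity levels; higher values mean tighter handling."""
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2
    RESTRICTED = 3

# Hypothetical policy: the most sensitive classification an employee may
# share with each category of AI tool. A real policy would be far richer.
MAX_LEVEL_FOR_TOOL = {
    "approved_enterprise_ai": Classification.CONFIDENTIAL,
    "public_chatbot": Classification.PUBLIC,
}

def may_share(data_level: Classification, tool: str) -> bool:
    """Return True if data at this classification may be sent to the tool."""
    allowed = MAX_LEVEL_FOR_TOOL.get(tool, Classification.PUBLIC)
    return data_level <= allowed

# Example: customer payment details are confidential, so a public chatbot is off-limits.
print(may_share(Classification.CONFIDENTIAL, "public_chatbot"))          # False
print(may_share(Classification.CONFIDENTIAL, "approved_enterprise_ai"))  # True
```

In practice such checks would live in data loss prevention or access-control tooling rather than application code, but the key idea is the same: classify the data first, then enforce limits per destination.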
Striking a balance between encouraging AI use and building guardrails
“What we’re facing is not a technology problem, but a user challenge,” said James Robinson, chief information security officer at data security firm Netskope. The goal, he explained, is to ensure that employees use generative AI tools safely — without discouraging them from adopting approved technologies.
“We need to understand what the company is trying to achieve,” he added. Rather than simply telling employees they’re doing something wrong, security teams should work to understand how people are using the tools, then determine whether existing policies are appropriate or need to be modified so employees can share information appropriately.
Jacob DePriest, chief information security officer at password management provider 1Password, agrees, saying his company tries to strike a balance with its policies: encouraging the use of AI while providing the education needed to put the right guardrails in place.
Sometimes that means making adjustments. For example, 1Password issued an acceptable-use policy for AI last year as part of its annual security training. “In general, the theme is: ‘Please use AI responsibly; please focus on approved tools; here are some areas of unacceptable use.’” But the way it was written made many employees feel overly cautious, he said.
“It’s a good problem to have, but CISOs can’t focus exclusively on security,” he said. “We have to understand the business objectives and then help the company achieve the business objectives and security outcomes as well. And I think AI technology in the last decade has highlighted the need for that balance. So we’ve really tried to approach this in tandem between security and enabling productivity.”
Banning AI tools to avoid misuse doesn’t work
But companies that think blocking certain tools is the answer should think again. Brooke Johnson, senior vice president of human resources and security at Ivanti, said her company found that among people who use generative AI at work, nearly a third keep their AI use completely hidden from management. “They share company data with unvetted systems, run requests through platforms with unclear data policies, and potentially expose sensitive information,” she said in a message.
The instinct to ban some tools is understandable but misguided, she said. “You don’t want employees to get better at hiding the use of AI, you want them to be transparent so it can be monitored and regulated,” she explained. This means accepting that the use of AI occurs regardless of policy, and properly evaluating which AI platforms meet your security standards.
“Educate teams about specific risks without vague warnings,” she said. She suggested helping them understand why certain guardrails exist, while emphasizing that they are not punitive. “It’s about ensuring they can do their jobs efficiently, effectively and safely.”
AI agents will create new challenges for data security
Do you think securing data in the age of AI is complicated now? AI agents will up the ante, DePriest said.
“For these agents to work effectively, they need access to credentials, tokens and identities, and they can act on behalf of an individual – they may even have their own identity,” he said. “For example, we don’t want to facilitate a situation where an employee might cede decision-making authority to an AI agent, as that could impact a human.” He explained that organizations want tools that help them learn faster and gather data more quickly, but in the end, humans need to be able to make the critical decisions.
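As a rough illustration of the principle DePriest describes, keeping humans in the loop for consequential actions, here is a hypothetical sketch in Python. The agent identity, scope names, critical-action list, and approval flag are invented for the example and do not reflect any particular product’s API.

```python
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    """Hypothetical agent identity with its own narrowly scoped permissions."""
    name: str
    scopes: set[str] = field(default_factory=set)   # e.g. {"read:docs"}

# Actions considered consequential enough to require a human sign-off.
CRITICAL_ACTIONS = {"delete_records", "approve_payment", "change_access"}

def execute(agent: AgentIdentity, action: str, human_approved: bool = False) -> str:
    """Run an agent action only if its scopes allow it, and gate critical
    actions behind explicit human approval."""
    if action not in agent.scopes:
        return f"denied: {agent.name} lacks scope for '{action}'"
    if action in CRITICAL_ACTIONS and not human_approved:
        return f"blocked: '{action}' needs explicit human approval"
    return f"executed: {action}"

bot = AgentIdentity("report-helper", scopes={"read:docs", "approve_payment"})
print(execute(bot, "read:docs"))                             # executed
print(execute(bot, "approve_payment"))                       # blocked, waits for a person
print(execute(bot, "approve_payment", human_approved=True))  # executed after sign-off
```

The design choice this sketch highlights is the same one in the quote: an agent can hold its own identity and credentials, but authority over decisions that affect people stays with a human.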
Whether it’s the AI agents of the future or the generative AI tools of today, striking the right balance between enabling productivity gains and keeping data safe can be difficult. But experts say every company faces the same challenge, and confronting it head-on is the best way to ride the AI wave. The risks are real, but with the right mix of education, transparency, and oversight, companies can harness the power of AI — without handing over the keys to the kingdom.

