As artificial intelligence (AI) tools become more advanced, employees across industries are increasingly using them at work—often without the approval of their companies’ IT departments.
“It’s easier to get forgiveness than permission,” says John, a software engineer at a financial services technology firm, who regularly uses unauthorized AI tools to enhance his productivity. His company provides GitHub Copilot for AI-assisted coding, but he prefers Cursor, a different AI coding assistant.
“It’s largely a glorified autocomplete, but it’s very good,” he says. “It completes 15 lines at a time, and then you look over it and say, ‘yes, that’s what I would’ve typed.’ It frees you up. You feel more fluent.”
John is one of many workers engaging in what experts call “shadow AI”—the use of unapproved AI tools in the workplace. A recent survey by Software AG found that 50% of knowledge workers use personal AI tools on the job. Some do so because their IT departments don’t offer AI solutions, while others simply prefer different tools.
A Growing Trend with Risks
Peter (not his real name), a product manager at a data storage company, also bypasses company policy to use AI. His employer offers Google Gemini AI, but external AI tools are strictly banned. Still, Peter relies on ChatGPT via search tool Kagi to analyze competitor videos and generate insights quickly.
“The AI is not so much giving you answers as giving you a sparring partner,” he explains. “As a product manager, you have a lot of responsibility but limited outlets to discuss strategy. These tools allow that in an unlimited capacity.”
He estimates that his use of AI makes him about 30% more productive—roughly the equivalent of a third of an additional employee working for free.
Despite its advantages, shadow AI poses serious security risks. According to Harmonic Security, which tracks AI tool usage in businesses, over 5,000 AI apps are currently being used without IT approval. Many AI models train on user inputs, raising concerns about sensitive corporate data being stored and potentially exposed.
“It’s pretty hard to extract data from these AI tools,” says Alastair Paterson, CEO of Harmonic Security. “But the bigger issue is that firms have no control or visibility over where their data is going.”
A New Approach to AI Governance
Rather than fighting shadow AI, some companies are embracing the shift.
Trimble, a software and hardware provider, developed Trimble Assistant, an internal AI tool built on the same technology as ChatGPT. Employees can use it for product development, customer support, and market research, ensuring AI is used securely within the company’s ecosystem.
“I encourage people to explore AI tools in their personal lives,” says Karoliina Torttila, Trimble’s director of AI. “But in a professional setting, there must be safeguards.”
She believes employees’ experiences with AI outside of work can help shape corporate policies, but companies need to maintain an ongoing dialogue about which tools serve them best.
‘Welcome to the Club’
Experts suggest companies take a pragmatic approach to shadow AI rather than attempting to ban it outright.
Simon Haighton-Williams, CEO of The Adaptavist Group, advises businesses to understand why employees use unauthorized AI and to find ways to integrate it safely into company workflows.
“Welcome to the club. Everybody has shadow AI,” he says. “Be patient, figure out what people are using and why, and manage it rather than shutting it down. You don’t want to be the company that gets left behind.”
With AI technology evolving rapidly, rigid bans may no longer be sustainable. Instead, organizations may need to embrace AI on their own terms while keeping security and compliance a top priority.