‘Shadow AI’ on the Rise as Employees Use Unauthorized Tools at Work
As artificial intelligence (AI) tools become more advanced, employees across industries are increasingly using them at work—often without the approval of their companies’ IT departments.
“It’s easier to get forgiveness than permission,” says John, a software engineer at a financial services technology firm, who regularly uses unauthorized AI tools to enhance his productivity. His company provides GitHub Copilot for AI-assisted coding, but he prefers Cursor, a different AI coding assistant.
“It’s largely a glorified autocomplete, but it’s very good,” he says. “It completes 15 lines at a time, and then you look over it and say, ‘yes, that’s what I would’ve typed.’ It frees you up. You feel more fluent.”
John is one of many workers engaging in what experts call “shadow AI”—the use of unapproved AI tools in the workplace. A recent survey by Software AG found that 50% of knowledge workers use personal AI tools on the job. Some do so because their IT departments don’t offer AI solutions, while others simply prefer different tools.
A Growing Trend with Risks
Peter (not his real name), a product manager at a data storage company, also bypasses company policy to use AI. His employer offers Google Gemini AI, but external AI tools are strictly banned. Still, Peter relies on ChatGPT, accessed through the search tool Kagi, to analyze competitor videos and generate insights quickly.
“The AI is not so much giving you answers as giving you a sparring partner,” he explains. “As a product manager, you have a lot of responsibility but limited outlets to discuss strategy. These tools allow that in an unlimited capacity.”
He estimates that his use of AI makes him about 30% more productive, the equivalent of gaining roughly a third of an additional employee's output for free.
Despite its advantages, shadow AI poses serious security risks. According to Harmonic Security, which tracks AI tool usage in businesses, over 5,000 AI apps are currently being used without IT approval. Many AI models train on user inputs, raising concerns about sensitive corporate data being stored and potentially exposed.
“It’s pretty hard to extract data from these AI tools,” says Alastair Paterson, CEO of Harmonic Security. “But the bigger issue is that firms have no control or visibility over where their data is going.”
A New Approach to AI Governance
Rather than fighting shadow AI, some companies are embracing the shift.
Trimble, a software and hardware provider, developed Trimble Assistant, an internal AI tool built on the same technology as ChatGPT. Employees can use it for product development, customer support, and market research, ensuring AI is used securely within the company’s ecosystem.
“I encourage people to explore AI tools in their personal lives,” says Karoliina Torttila, Trimble’s director of AI. “But in a professional setting, there must be safeguards.”
She believes employees’ experiences with AI outside of work can help shape corporate policies, but companies need to maintain an ongoing dialogue about which tools serve them best.
‘Welcome to the Club’
Experts suggest companies take a pragmatic approach to shadow AI rather than attempting to ban it outright.
Simon Haighton-Williams, CEO of The Adaptavist Group, advises businesses to understand why employees use unauthorized AI and to find ways to integrate it safely into company workflows.
“Welcome to the club. Everybody has shadow AI,” he says. “Be patient, figure out what people are using and why, and manage it rather than shutting it down. You don’t want to be the company that gets left behind.”
With AI technology rapidly evolving, rigid bans may no longer be sustainable. Instead, organizations may need to adapt, embracing AI while keeping security and compliance a top priority.
BBC Study Finds AI Chatbots Inaccurately Summarizing News Stories
A new investigation by the BBC has revealed that leading artificial intelligence (AI) chatbots frequently misrepresent and distort news stories, raising concerns over the accuracy and reliability of AI-generated content.
The research evaluated four major AI models—OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini, and Perplexity AI—by feeding them content from the BBC website and asking them questions about the news. The findings showed that more than half of the AI-generated responses contained significant inaccuracies.
AI ‘Playing with Fire’ in News Summarization
Deborah Turness, CEO of BBC News and Current Affairs, warned in a blog post that while AI presents “endless opportunities,” it also carries serious risks.
“How long will it be before an AI-distorted headline causes significant real-world harm?” she asked, calling on AI companies to “pull back” their news summaries.
Key Findings from the BBC Study
The BBC’s investigation involved 100 news stories, with journalists—who were experts in the respective fields—fact-checking the AI-generated summaries. The results were concerning:
- 51% of AI responses contained significant issues
- 19% of responses that cited BBC content contained factual errors, including incorrect dates, numbers, and statements
- AI models failed to differentiate between fact and opinion, often editorializing content instead of summarizing it accurately
Specific inaccuracies included:
- Google’s Gemini falsely stating that the NHS does not recommend vaping as a smoking cessation aid
- ChatGPT and Copilot incorrectly claiming that Rishi Sunak and Nicola Sturgeon were still in office after they had left
- Perplexity misquoting BBC News in a story about the Middle East, incorrectly stating that Iran initially showed “restraint” while describing Israel’s actions as “aggressive”
Among the four AI models, the study found that Microsoft’s Copilot and Google’s Gemini produced the most significant inaccuracies, while OpenAI’s ChatGPT and Perplexity performed slightly better.
BBC Calls for AI Transparency and Control
In response to the findings, Pete Archer, the BBC’s Programme Director for Generative AI, emphasized that publishers should have control over how their content is used.
He urged AI companies to provide transparency on how their models process and summarize news while also revealing the scale and frequency of errors in their outputs.
Although the BBC typically blocks AI bots from accessing its content, it temporarily lifted these restrictions for the study in December 2024.
Turness stressed the need for a collaborative effort between news organizations and AI developers to find solutions, citing Apple’s decision to pull back its AI-generated news summaries after the BBC raised similar concerns.
As AI-powered news summaries become increasingly prevalent, the report highlights the potential dangers of misinformation and distortion, reinforcing the urgent need for greater oversight and responsible AI development in journalism.
TikTok Faces Lawsuit Over Tragic Deaths Linked to ‘Blackout Challenge’
A TikTok executive has acknowledged that data sought by grieving parents, who believe their children died attempting a viral challenge on the platform, may have already been removed due to legal and data protection policies.
The lawsuit, filed against TikTok and its parent company ByteDance, involves the deaths of Isaac Kenevan, Archie Battersbee, Julian “Jools” Sweeney, and Maia Walsh—all between 12 and 14 years old. The parents allege that their children lost their lives after attempting the “blackout challenge”, a dangerous trend where individuals intentionally deprive themselves of oxygen.
Parents Seek Answers, TikTok Cites Legal Limitations
Speaking to BBC Radio 5 Live on Safer Internet Day, Giles Derrington, senior government relations manager at TikTok, said the company was in contact with some of the parents and acknowledged their “unfathomably tragic” losses. However, he emphasized that TikTok might no longer have the data being requested.
“This is really complicated because it relates to the legal requirements around when we remove data,” Derrington explained. “We have, under data protection laws, requirements to remove data quite quickly. That impacts on what we can do.”
Families of the victims, however, have accused TikTok of lacking transparency and compassion.
Ellen Roome, mother of 14-year-old Jools, has been campaigning for legislation that would grant parents access to their deceased child’s social media accounts.
Lisa Kenevan, mother of 13-year-old Isaac, questioned TikTok’s handling of the situation, saying: “Why hold back on giving us the data? How can they sleep at night?”
Derrington defended the platform, stating that data deletion policies are legally mandated and that TikTok does not have hidden information it is withholding from parents.
“Everyone expects that when we are required by law to delete some data, we will have deleted it,” he said. “This is a more complicated situation than us just having something we’re not giving access to.”
Legal Battle and TikTok’s Defense
The Social Media Victims Law Center, a U.S.-based organization, is representing the families in court. The lawsuit alleges TikTok failed to enforce its own safety rules, allowing the blackout challenge to circulate widely in 2022 despite platform policies prohibiting content that promotes dangerous behavior.
TikTok, however, insists the blackout challenge predates the platform and that no evidence suggests it ever “trended” on TikTok.
“We have never found any evidence that the blackout challenge has been trending on the platform,” Derrington said. “Since 2020, we have completely banned even being able to search for the words ‘blackout challenge’ or variants of it, to make sure that no one is coming across that kind of content.”
TikTok’s Safety Measures Under Scrutiny
TikTok says it is investing heavily in content moderation, with over $2 billion (£1.6 billion) allocated this year alone and tens of thousands of human moderators reviewing posts globally.
The platform has also launched an online safety hub to provide resources for users and parents on how to stay safe.
Despite these measures, the grieving families believe more must be done to prevent similar tragedies and hold tech companies accountable.
“This is a really, really tragic situation,” Derrington said. “But we are constantly working to make sure that people are safe on TikTok.”
As the lawsuit unfolds, the case will likely fuel ongoing debates about social media responsibility, parental control over digital content, and the effectiveness of platform safety policies.
What Your Fingernails Reveal About Your Health
Fingernails serve more than just a cosmetic purpose—they protect the delicate skin underneath and help with everyday tasks like scratching an itch or peeling fruit. However, doctors say that nails can also offer valuable insights into a person’s overall health, sometimes even signaling serious medical conditions.
A Window to Your Health
Doctors have long used fingernails as a diagnostic tool to identify potential health problems. Changes in color, thickness, or shape can indicate anything from nutritional deficiencies to serious illnesses like lung disease or diabetes.
One of the most well-known signs doctors look for is clubbing, a condition where the nails curve downward and the fingertips appear swollen. “One of the first things I learned in medical school was to look for clubbing,” says Dr. Dan Baumgardt, a general practitioner and lecturer at the University of Bristol. Clubbing is often associated with low blood oxygen levels and can be an early sign of lung cancer, heart infections, or liver disease.
Color and Texture Clues
A healthy nail bed should be pink, with a white crescent-shaped lunula at the base. Any significant discoloration can indicate an underlying issue.
- White or yellow nails: These could point to fungal infections, especially in toenails. Dr. Holly Wilkinson, a wound healing expert at the University of Hull, warns that untreated fungal infections can become difficult to manage.
- Brittle nails: Weak, easily broken nails may suggest hypothyroidism or a vitamin B7 (biotin) deficiency.
- Horizontal ridges (Beau’s lines): These may indicate protein deficiency, diabetes, or peripheral vascular disease, a condition that restricts blood flow.
Spoon-Shaped Nails and Nutritional Deficiencies
Another condition, koilonychia, causes nails to become thin and concave, resembling spoons. This can be a sign of iron-deficiency anemia, where the body lacks enough healthy red blood cells to carry oxygen. In some cases, it can also indicate celiac disease.
“Nail shape can tell us a lot,” says Dr. Mary Pearson, a pediatrician at the University Hospital of Wales. “When we suspect chronic disease or malnutrition, examining a child’s nails can provide critical clues.”
Lifestyle Factors at Play
Not all nail changes point to serious health problems—some may be linked to everyday habits. Peeling nails (onychoschizia) can result from excessive handwashing, dryness, or frequent use of acrylics and nail polish, according to Dr. Joshua Zeichner, a dermatologist at The Mount Sinai Hospital in New York.
While minor nail changes are often harmless, experts emphasize the importance of seeking medical advice if significant changes occur. Whether it’s a sign of an underlying condition or simply a call for better nail care, your fingernails may be telling you more than you think.