
A recent article highlighted just how quickly AI risk can escalate when powerful tools are deployed without governance. The piece examines OpenClaw, a fast-growing, open-source agentic AI platform designed to “actually do things”: from managing calendars and emails to executing scripts and automating tasks.

Unlike conversational AI, agentic AI operates with action-level access. OpenClaw allows users to install third-party “skills” (similar to browser extensions) that expand what the AI can do. That flexibility is also where the risk emerged.

Security researchers discovered hundreds of malicious extensions uploaded to OpenClaw’s marketplace, some of which were designed to steal credentials, API keys, and sensitive data. These extensions were able to operate because the platform’s open ecosystem allowed anyone to publish skills with limited oversight.

While OpenClaw is now introducing safeguards such as scanning for malicious intent, reporting mechanisms, and account-age requirements, the incident underscores a critical lesson for businesses:

When AI tools are powerful, extensible, and unguided, even well-intentioned usage can expose organizations to serious risk. 

This wasn’t a nation-state attack or a sophisticated breach. It was a governance failure: the same kind that occurs inside organizations when employees adopt AI tools without clear rules, approved platforms, or security boundaries.

Why This Matters for Businesses Using AI Today 

The OpenClaw situation mirrors what’s happening quietly inside many companies: 

  • Employees install AI extensions or agents to “move faster” 
  • Tools run with broad permissions by default 
  • No one has visibility into what data the AI can access—or act on 

In the case of OpenClaw, malicious extensions were able to operate inside legitimate workflows, highlighting how traditional security controls often fail to detect AI-driven risk.

For businesses, this reinforces an uncomfortable truth: 

AI risk doesn’t come from using AI; it comes from using AI without direction.

The Safer Path Forward: Governed AI, Not Shadow AI 

Instead of relying on public, open-source, or unmanaged AI agents, organizations should prioritize tools that:

  • Respect existing identity and access controls 
  • Honor data permissions and sensitivity labels 
  • Keep AI activity inside monitored, auditable environments 

Microsoft Copilot is designed with these principles in mind. It operates within your Microsoft 365 tenant, inheriting your security, compliance, and data governance policies, rather than bypassing them. 

That difference is critical. 

Copilot doesn’t require employees to install unknown extensions, paste data into public tools, or grant system-level permissions to experimental agents. It enables productivity without expanding your attack surface.

Let us help you get started with AI, safely. 

Whether you have planned for it or not, AI adoption is already happening in your business. 

The question isn’t if your team will use AI.
It’s whether they’ll do it with guardrails or without them.

If you’re exploring AI but want to avoid the risks highlighted by OpenClaw and similar tools, we can help you start the right way. 

Schedule a 15-minute AI discovery call to:

  • Understand where AI is already being used in your business 
  • Learn how Copilot fits into your existing security and compliance model 
  • Identify safe, practical first use cases for your team 

No pressure. No jargon. Just clarity.