S&P 500DowNASDAQRussell 2000FTSE 100DAXCAC 40NikkeiHang SengASX 200ALEXALKBOHCPFCYANFHBHEMATXMLPNVDAAAPLGOOGLGOOGMSFTAMZNMETAAVGOTSLABRK.BWMTLLYJPMVXOMJNJMAMUCOSTBACORCLABBVHDPGCVXNFLXKOAMDGECATPEPMRKADBEDISUNHCSCOINTCCRMPMMCDACNTMONEEBMYDHRHONRTXUPSTXNLINQCOMAMGNSPGIINTUCOPLOWAMATBKNGAXPDELMTMDTCBADPGILDMDLZSYKBLKCADIREGNSBUXNOWCIVRTXZTSMMCPLDSODUKCMCSAAPDBSXBDXEOGICEISRGSLBLRCXPGRUSBSCHWELVITWKLACWMEQIXETNTGTMOHCAAPTVBTCETHXRPUSDTSOLBNBUSDCDOGEADASTETHS&P 500DowNASDAQRussell 2000FTSE 100DAXCAC 40NikkeiHang SengASX 200ALEXALKBOHCPFCYANFHBHEMATXMLPNVDAAAPLGOOGLGOOGMSFTAMZNMETAAVGOTSLABRK.BWMTLLYJPMVXOMJNJMAMUCOSTBACORCLABBVHDPGCVXNFLXKOAMDGECATPEPMRKADBEDISUNHCSCOINTCCRMPMMCDACNTMONEEBMYDHRHONRTXUPSTXNLINQCOMAMGNSPGIINTUCOPLOWAMATBKNGAXPDELMTMDTCBADPGILDMDLZSYKBLKCADIREGNSBUXNOWCIVRTXZTSMMCPLDSODUKCMCSAAPDBSXBDXEOGICEISRGSLBLRCXPGRUSBSCHWELVITWKLACWMEQIXETNTGTMOHCAAPTVBTCETHXRPUSDTSOLBNBUSDCDOGEADASTETH

Hawaii Businesses Face Data Breach Risks as AI Agents Bypass Security Controls

·8 min read·Act Now

Executive Summary

New vulnerabilities in AI agent platforms like Microsoft Copilot Studio and Salesforce Agentforce allow for unauthorized data exfiltration, necessitating immediate security audits and policy reviews for businesses relying on these tools. This discovery underscores a new class of security risks that cannot be resolved by traditional patching alone.

Action Required

High PriorityImmediate audit window for compromised systems is November 24, 2025 - January 15, 2026.

These vulnerabilities allow for data exfiltration with low complexity and no privileges, requiring immediate auditing of affected systems and potential remediation.

Businesses using Microsoft Copilot Studio or Salesforce Agentforce must immediately audit their systems for signs of compromise, particularly for activity between November 24, 2025, and January 15, 2026, if utilizing SharePoint forms with Copilot Studio. Implement runtime security solutions that monitor AI agent actions in real-time. For all AI agents, enforce the principle of least privilege and restrict outbound communication capabilities. Health providers must ensure strict HIPAA compliance by auditing AI handling of PHI and ensuring no unauthorized data transfers occur. Investors should update due diligence to include AI agent security assessments.

Who's Affected
Entrepreneurs & StartupsSmall Business OperatorsRemote WorkersInvestorsTourism OperatorsHealthcare ProvidersAgriculture & Food Producers
Ripple Effects
  • Increased cybersecurity costs and insurance premiums for businesses due to the need for advanced runtime security solutions.
  • Potential for stricter AI regulation and increased compliance burdens for businesses handling sensitive data.
  • Erosion of customer trust and potential negative impact on Hawaii's tourism sector if high-profile data breaches occur.
  • Challenges in talent acquisition and retention if concerns about AI-driven data compromise and job displacement increase.
A vintage typewriter outdoors displaying "AI ethics" on paper, symbolizing tradition meets technology.
Photo by Markus Winkler

Hawaii Businesses Face Data Breach Risks as AI Agents Bypass Security Controls

Recent discoveries of indirect prompt injection vulnerabilities in prominent AI agent platforms, specifically Microsoft Copilot Studio and Salesforce Agentforce, pose a significant risk to Hawaii businesses. These flaws, demonstrated by Capsule Security, allow attackers to trick AI agents into exfiltrating sensitive data—even when security mechanisms flag the activity. The vulnerabilities, tracked under CVE-2026-21520 for Copilot Studio and a similar unassigned flaw in Agentforce, highlight a critical gap in how AI systems process instructions and data, potentially impacting any enterprise utilizing these tools for automation and customer interaction.

The Change

Capsule Security identified two key vulnerabilities: "ShareLeak" in Microsoft Copilot Studio and "PipeLeak" in Salesforce Agentforce. These are not simple software bugs but fundamental exploits in how AI agents interact with external data and user inputs. ShareLeak leverages a gap between a SharePoint form submission and the Copilot Studio agent's context window. An attacker can inject malicious instructions into a public comment field, overriding the agent's original programming. This allows the agent to query connected SharePoint Lists for sensitive customer data and exfiltrate it via Outlook, bypassing even Microsoft's own safety flagging and Data Loss Prevention (DLP) systems, as the email was routed through a legitimate, though compromised, Outlook action.

Similarly, PipeLeak exploits Salesforce Agentforce, allowing a payload from a public lead form to hijack an agent without authentication. This exploit reportedly has no volume cap on exfiltrated CRM data, and employees who trigger the agent receive no notification it's happening. While Microsoft patched ShareLeak and assigned a CVE, Salesforce has yet to assign a CVE or issue a public advisory for PipeLeak, despite Capsule Security's findings. These exploits exploit a fundamental architectural weakness where AI models struggle to differentiate between trusted instructions and untrusted retrieved data, a pattern classified by OWASP as ASI01: Agent Goal Hijack.

The implications extend beyond these specific platforms. The fact that Microsoft assigned a CVE to a prompt injection vulnerability in an agent-building platform, rather than just a productivity assistant, signals a new category of exploit for which enterprises must prepare. Furthermore, the inability to eliminate this class of vulnerability through patching alone means that a continuous, runtime security approach is essential.

Who's Affected

This development directly impacts a broad spectrum of Hawaii businesses:

  • Entrepreneurs & Startups: Companies leveraging AI agents for customer service, data analysis, or internal automation may find their sensitive proprietary data, customer lists, or intellectual property at risk. Scaling these operations with AI now carries an inherent, unpatched risk.
  • Small Business Operators: Local businesses using Copilot Studio or similar tools for customer form submissions (e.g., appointment booking, feedback collection via SharePoint or public web forms) could see their customer contact information or internal operational data compromised.
  • Remote Workers: While less directly impacted by system-level exploits, remote workers relying on AI assistants for productivity could be unknowingly interacting with compromised data streams or contributing to data leakage if their tools are affected.
  • Investors: VCs and angel investors need to strengthen due diligence on portfolio companies using AI agents, assessing their exposure to data breaches and regulatory non-compliance, which could impact valuations and exit opportunities.
  • Tourism Operators: Hotels, tour companies, and vacation rentals using AI agents to manage bookings, customer inquiries, or feedback forms could have critical guest data, payment information, or personal details exfiltrated.
  • Healthcare Providers: Clinics and medical practices using AI agents for patient intake, scheduling, or data summarization face severe HIPAA compliance risks if patient Protected Health Information (PHI) is compromised through these vulnerabilities.
  • Agriculture & Food Producers: Businesses using AI agents for supply chain management, customer orders, or market analysis could have sensitive operational data, customer lists, or financial information exposed.

Second-Order Effects

  • Increased Cybersecurity Costs & Insurance Premiums: A rise in successful AI-driven data breaches will likely lead to higher operational costs for businesses needing to implement advanced runtime security solutions and increased cybersecurity insurance premiums, impacting small operators and startups disproportionately.
  • Stricter AI Regulation & Compliance Burden: Discovered vulnerabilities may accelerate regulatory scrutiny on AI agent usage, leading to new compliance requirements for data handling and disclosure, potentially increasing operational overhead for all businesses, especially those with international clients or investor bases.
  • Erosion of Customer Trust & Tourism Impact: A significant data breach involving customer data, particularly in the tourism sector, could lead to a severe loss of consumer trust, impacting visitor numbers and Hawaii's reputation as a safe destination for travel and investment.
  • Talent Acquisition & Retention Challenges: As AI agents become more integrated into workflows, fear of data compromise or job displacement due to AI could create challenges in attracting and retaining skilled talent, particularly in tech-focused industries within Hawaii.

What to Do

Given the direct impact and the nature of these vulnerabilities (exploits that bypass traditional patching), immediate action is required. The audit window for compromised systems is particularly critical, spanning from November 24, 2025, to January 15, 2026, for Copilot Studio.

For Entrepreneurs & Startups:

  • Act Now: Immediately review all AI agent deployments, especially those connected to customer data or external interactions. Audit logs for suspicious activity between November 24, 2025, and January 15, 2026, if using Copilot Studio with SharePoint forms. For Salesforce Agentforce, audit any agents triggered by public-facing forms and review data access logs.
  • Act Now: Implement a runtime security solution that monitors agent actions in real-time rather than relying solely on pre-deployment vulnerability scanning or patching. Consider solutions that analyze tool calls before execution.
  • Act Now: Re-evaluate the permissions granted to AI agents. Adhere to the principle of least privilege, ensuring agents only access the data and perform the actions strictly necessary for their function.

For Small Business Operators:

  • Act Now: If using Copilot Studio with SharePoint forms, thoroughly audit your systems for indicators of compromise within the November 24, 2025 - January 15, 2026 window. Check for any unusual outbound email activity related to SharePoint data.
  • Act Now: For any AI agent connected to customer forms or data entry points (e.g., lead forms in Salesforce), review the agent's data access scope. Restrict outbound communications from these agents to only approved, internal domains.
  • Watch: Monitor official advisories from Microsoft and Salesforce regarding these vulnerabilities and any new mitigation strategies. Consider enhanced human oversight for critical operations handled by AI agents as an interim measure.

For Investors:

  • Act Now: Update your due diligence checklists to include specific questions about AI agent security protocols, particularly regarding prompt injection risks and runtime monitoring. Evaluate the exposure of your portfolio companies.
  • Watch: Monitor market developments for AI security solutions that address runtime enforcement and agent goal hijacking. Companies providing such solutions may represent new investment opportunities.

For Tourism Operators:

  • Act Now: Audit all AI agents used for booking, customer service, or feedback collection. Pay close attention to any agents interacting with public-facing forms or internal customer databases (like CRM in Salesforce Agentforce).
  • Act Now: Restrict outbound email capabilities for any AI agents. If an agent needs to send external communications, ensure it's routed through a secure, human-verified channel.
  • Watch: If using Copilot Studio and SharePoint forms, actively audit logs for any suspicious data exfiltration attempts between Nov 24, 2025, and Jan 15, 2026.

For Healthcare Providers:

  • Act Now: For any AI agent handling patient data (PHI), conduct an immediate and thorough audit of its access controls and communication logs. The potential for HIPAA violations via data exfiltration is severe.
  • Act Now: Ensure all AI agents adhere to the strictest data minimization principles and least privilege. Disable any direct external communication channels for these agents unless absolutely essential and heavily secured.
  • Act Now: Implement robust runtime monitoring for any AI agents processing PHI. This should include monitoring for anomalous data access patterns and unauthorized data transfers.

For Agriculture & Food Producers:

  • Act Now: Review any AI agents used for operational management, customer orders, or supplier interactions. Audit their connection to sensitive data sources and external communication capabilities.
  • Act Now: If using tools like Copilot Studio with forms that collect customer or supplier information, audit for suspicious data activity during the specified vulnerability window (Nov 24, 2025 - Jan 15, 2026).
  • Watch: Stay informed about how AI agent security develops, as increased operational efficiency through AI could be undermined by security risks.

Further Information

This risk brief is based on disclosures from Capsule Security and analyses by VentureBeat. For more technical details and recommended controls, refer to:

Given that these vulnerabilities represent a new, persistent threat that traditional patching cannot fully resolve, adopting a robust runtime security posture is paramount for all Hawaii businesses leveraging AI agents.

More from us