Security Risks from Deploying Generative AI: Google Gemini vs. Microsoft Copilot

Security Risks from Deploying Generative AI: Google Gemini vs. Microsoft Copilot

Companies are eager to deploy generative AI assistants like Google Gemini and Microsoft Copilot, but security professionals have concerns. Privacy issues, manipulation by threat actors, and exposure of sensitive data are all risks organizations must be aware of when using these tools. 

The time savings and economic gains of AI chatbots are enticing. Gemini and Copilot promise to transform the workplace, saving employees hours on tedious tasks and boosting efficiency. 

According to McKinsey, generative AI’s impact on productivity could add $2.6 trillion to $4.4 trillion in value to the global economy each year. By 2030, these tools could automate 30% of hours worked today.

However, companies must understand the information security risks of AI assistants before deploying them in their organizations. 

We’ll give an overview of: 

  1. The main differences between Google Gemini and Microsoft Copilot.
  2. The four major security risks for each tool.
  3. Methods to help your company reduce these risks.

What is Google Gemini?

Gemini is Google’s family of Large Language Models (LLMs) and comprises three different model sizes:

  1. Nano: Designed for on-device processing and other lightweight applications.
  2. Pro: Designed for efficiently scaling across a variety of tasks.
  3. Ultra: Designed for complex tasks and as a competitor to OpenAI’s GPT-4.

Gemini for Google Workspace is an AI-powered assistant designed for companies and built into Gmail, Docs, Sheets, and more. The tool is a productivity aide that helps employees with tasks like writing client outreach emails, project plans, and job postings. 

The AI assistant in Google Workspace has five main use cases: 

  1. Help me write.
  2. Help me organize.
  3. Create an image.
  4. Help me connect.
  5. Help me create an app.

These functions increase productivity in Google services like Docs, Sheets, Gmail, Slides, Meet, and AppSheet. They help employees write copy, make project trackers, and create AI images, as well as enhance video and sound quality in meetings and create applications without writing code. 

Gemini can speed up projects and reduce time spent on daily tasks. However, its capabilities are currently not as strong as Copilot for Microsoft 365, and it’s more of a tool that helps you organize and optimize what you’re already doing in Google Workspace. 

Who can use it: To use the tool in your organization, you’ll need a Google Workspace plan. The assistant is now generally available to companies of all sizes, and it’s possible to try it at no extra cost. 

Available plans: For companies, two Gemini for Google Workspace plans are available: Gemini Business and Gemini Enterprise. Gemini Advanced is also available for individual users’ personal accounts, but not for work accounts or in some countries. 

Gemini Business

  • $20 per user per month for an annual commitment.
  • Gemini is available in Gmail, Docs, Slides, Sheets, and Meet.
  • Has access to Gemini with 1.0 Ultra.
  • Enterprise-grade security and privacy.

Gemini Enterprise

  • $30 per user per month for an annual commitment.
  • 1.0 Ultra model.
  • Includes everything from Gemini Business, plus:
    • Advanced meetings with translated captions in 15+ languages.
    • Full access and usage of Gemini.

Gemini Advanced (Google One AI Premium)

  • $19.99 per month
  • 1.0 Ultra model.
  • Gemini in Gmail, Docs, and more.
  • For individual users
  • Not available for work accounts, for minors, or in some countries.

For more information on these plans, visit here

What is Copilot for Microsoft 365?

Microsoft Copilot’s paid enterprise and business version: Copilot for Microsoft 365 is available to companies using Microsoft 365 that choose to purchase the upgrade. 

The AI assistant integrates with Microsoft 365 apps including Teams, Word, Outlook, PowerPoint, Excel, Meet, and more. It helps employees streamline work-related tasks, from summarizing research to staying engaged in meetings. 

Copilot can also search and gather data from all Microsoft 365 documents, emails, calendars, presentations, contacts, and more. It analyzes employee behavior when using Microsoft apps and offers tips and shortcuts for greater productivity. 

Copilot allows organizations to gain context across multiple apps and services in their Microsoft 365 tenants. Using Copilot Studio, enterprise companies can further tailor Copilot for Microsoft 365 to their workplace or create custom chatbots of their own. 

Who can use it: 

  • Enterprise customers must have a license for Microsoft 365 E3, Microsoft 365 E5, Office 365 E3, or Office 365 E5. 
  • Business customers must have Microsoft 365 Business Standard or Business Premium, or a version of these suites that does not include Microsoft Teams. 
  • Education customers must have a license for Microsoft 365 A3 or Microsoft 365 A5 for faculty. 
  • Consumers are not currently eligible to purchase Copilot for Microsoft 365.

Available plans: Copilot for Microsoft 365 offers two plans: one for businesses and one for enterprises. It’s also included in other Microsoft Copilot plans including Copilot for Sales and Copilot for Service; more information on these plans here.

Copilot for Microsoft 365: Business

  • $30 per user per month for an annual commitment. Pay yearly: $360 per user, per year. 
  • Integrated with Teams, Word, Outlook, PowerPoint, Excel, Edge for Business, and other Microsoft 365 apps. 
  • AI-powered chat with Microsoft Copilot.
  • Enterprise-grade security, privacy, and compliance.

Copilot for Microsoft 365: Enterprise

  • $30 per user per month for an annual commitment. Pay yearly: $360 per user, per year. 
  • Integrated with Teams, Word, Outlook, PowerPoint, Excel, Edge for Business, and other Microsoft 365 apps. 
  • AI-powered chat with Microsoft Copilot.
  • Includes Copilot Studio, to create plug-ins for your data and automation. 
  • Enterprise-grade security, privacy, and compliance.

For more information on all nine Microsoft Copilot plans and their pricing, visit here.

Security Risks of Google Gemini

What are the four main security risks of Google Gemini? 

Gemini comes with four security risks that companies and consumers should consider. These four risks are system prompt leakage, indirect injection attacks, bypassing content restrictions, and privacy concerns. 

These risks are particularly important for developers using the Gemini API, users who have Gemini Advanced, companies who connect Gemini to Google Workspace, and all users using Gemini Apps. Some risks could affect both Gemini Pro and Gemini Ultra models.

Note: These are only a few risks that security researchers have written about and tested. We anticipate with the further rollout and new Gemini offerings, greater potential threats will come to light. 

1. Gemini Security Risk: System prompt leakage

System prompt leaks are a serious concern because they could reveal hidden instructions used to guide the large language model. Malicious actors could then potentially reverse engineer this information and use it to craft more effective attacks.

System prompt leaks may also expose sensitive data contained within the prompt, such as passwords. Researchers at HiddenLayer cleverly bypassed Gemini’s restrictions by using a “synonym attack.” Instead of directly asking for the tool’s system prompt, they rephrased their question and tricked Gemini into revealing its “foundational instructions in a markdown code block.”

First attempt by security researchers. Source: HiddenLayer

The second attempt using a synonym attack. Source: HiddenLayer

This attack exploits the Inverse Scaling property found in LLMs. Basically, the bigger the LLM, the harder it is to fine-tune it against every possible attack. This leaves them more susceptible to being tricked with slightly different phrasings that the creators might not have included in their initial training.

2. Gemini Security Risk: Indirect prompt injection attacks

Indirect prompt injection attacks allow threat actors to obtain complete control of AI assistants.

Kai Greshake originally identified this vulnerability that manipulates LLMs through non-textual means. The technique existed in Bard’s early stages, where attackers could exploit Google Docs to inject malicious instructions, but Google addressed the issue by removing the feature.

However, the release of Gemini Advanced, an extension that integrates Google Workspace with Gemini, reintroduced this risk. The HiddenLayer team showed an example of how this attack could work by using a blank Google document with some instructions in it and connecting it to Gemini Advanced via the Google Workspace extension. 

The instructions were written in a way that allowed the model to override its instructions and execute commands in a delayed manner, enabling more complex and obfuscated attacks, according to the team. 

Source: HiddenLayer

This situation worsens with Google file sharing, HiddenLayer said.  

They wrote: 

“This attack gets even scarier when you consider the implications of Google document sharing. A user could share a document with you without your knowledge and hide an instruction to pull the document in one of your prompts. From there, the attacker would have full control over your interactions with the model.”

The researchers recommend that users who use Gemini Advanced should make sure their Google Workspace extension access is disabled. This will ensure that shared documents will not affect their use of the model.

For information on how your company can secure file sharing in Google Workspace and Microsoft 365, visit here

3. Gemini Security Risk: Bypassing content restrictions

Cunning jailbreaking attacks allow researchers to bypass guardrails and fool Gemini into generating misinformation or harmful content. 

For example, the HiddenLayer researchers were able to trick Gemini Ultra, the most advanced version of Gemini, into revealing instructions on how to hotwire a Honda Civic. 

Source: HiddenLayer

This tactic isn’t new. Researchers have already demonstrated that other LLMs, including ChatGPT, can be manipulated using similar jailbreak attacks​​ to bypass their safety features and access restricted information. 

4. Gemini Security Risk: Privacy concerns in Gemini Apps

Gemini Apps carries the risk of violating user privacy by collecting information from users’ conversations, locations, feedback, and more.

Users should be aware when using Gemini Apps, including the Gemini web and mobile apps, about the warnings Google outlines in its Gemini Apps Privacy Hub

When you interact with Gemini applications, Google collects the following information: Conversations, Location, Feedback, and Usage information. 

In a section titled, “Your data and Gemini Apps,” Google writes this:

“Please don’t enter confidential information in your conversations or any data you wouldn’t want a reviewer to see or Google to use to improve our products, services, and machine-learning technologies.”

Google clarifies how long that data is retained: 

Gemini Apps conversations that have been reviewed by human reviewers are not deleted when you delete your Gemini Apps activity because they are kept separately and are not connected to your Google Account. Instead, they are retained for up to three years.

It concludes with:

“Even when Gemini Apps Activity is off, your conversations will be saved with your account for up to 72 hours. This lets Google provide the service and process any feedback. This activity won’t appear in your Gemini Apps Activity.”

Long story short: If you’re using Gemini Apps, don’t include sensitive information in your Gemini interactions, ever. 

Security Risks of Microsoft Copilot

What are the four main security risks of Copilot for Microsoft 365?

The main security risks of Microsoft Copilot are oversharing and bad permissions, privacy concerns, insider risks, and external threat actors.

The overarching issue is that Copilot can leverage all the data that an employee has access to. If the employee accidentally has access to sensitive data, then so will Copilot. This could have extensive security implications if overshared sensitive data gets into the wrong hands. 

For more information on how to reduce Copilot security risks, visit here.

1. Copilot Security Risk: Oversharing and bad permissions

One issue we see in our work at Nira is the oversharing of sensitive data within organizations. 

A staggering 16% of an organization’s critical data is exposed due to oversharing, according to a 2023 report. That adds up: companies face an average of 802,000 files containing “business-critical data” at risk due to oversharing. 

Employees will share hundreds of thousands of company files with coworkers, customers, vendors, and other third parties, giving them stronger permissions than needed or never revoking their access. A company’s sensitive files could be seen by users who do not have the proper access permissions, leading to data breaches or leaks.

Using Microsoft Copilot—a tool that makes information easier for employees to access than ever—exacerbates the risk of data exposure significantly.

Often, employees have broader access to company data than necessary. Sharing Microsoft documents is effortless—as easy as adding collaborators or opening a link that anyone in the company or on the internet can access. To make matters worse, permission restrictions are seldom removed when access is no longer required.

A Nira survey revealed that more than half of employees admitted to accidentally adding personal email accounts to company documents. And, although 40% of employees said they cleaned up vendor access on documents once an engagement was over, the rest (60%) only did clean up sometimes, rarely, or never. 

To reduce risks associated with Copilot, companies must ensure permissions are accurate and company information isn’t being shared excessively.

2. Copilot Security Risk: Privacy concerns

Copilot offers a powerful productivity boost, but its access to your organization’s data raises compliance, legal, and privacy considerations. Companies should take precautions like changing Copilot’s privacy settings or sanitizing information that Copilot is given access to.

According to Microsoft, Copilot for Microsoft 356 can access your organization’s data through Microsoft Graph, including emails, chats, and documents that you have permission to access. 

The suggestions from Copilot are relevant and based on your data, but this “also means that Microsoft Copilot may expose sensitive or confidential information in its responses.” 

Microsoft warns that you should always double-check Copilot’s content before using or sharing it with anyone else. Although the tool does not keep the prompts you send or receive, it does collect data about how you interact with it for improvement and diagnostic purposes. 

This data is encrypted and anonymized, but it could still contain parts of your personal or organizational information. Users can choose not to share this data with Microsoft by changing their privacy settings in Microsoft 365 apps.

3. Copilot Security Risk: Insider risks

Employees misusing Copilot, either by accident or via prompt hacking, could lead to data breaches or leaks. Employees don’t always realize how much data they have access to, which may expose sensitive information.

For instance, Nick Ross of T-Minus365 used Copilot to find references to a specific topic. Nick was surprised when the tool revealed “a citation of a document I didn’t know existed and a Teams Channel that I didn’t know I was a member of.” While Nick’s case involved non-sensitive data, the risk lies in employees unintentionally surfacing confidential information.

Although most incidents are accidental, malicious exfiltration can happen. By crafting specific prompts (prompt engineering), insiders might trick the tool into revealing unauthorized information.

In extreme scenarios, insiders could compromise the security of the Copilot software or its underlying model by exploiting vulnerabilities or introducing malicious code. 

4. Copilot Security Risk: External threat actors

Threat actors could compromise an account with AI-enabled features and work to exfiltrate sensitive data, creating a new front in cyberattacks.

For example, an attacker gains access to an account with Copilot for 365 and attempts to use it to access sensitive data from Microsoft Graph. 

Or a threat actor leaks a response to a prompt that contains confidential information. For example, using a prompt that asks for a financial report or leaking a response that reveals a trade secret. 

If attackers gain access to a Copilot-enabled account, company data is at risk. 

Companies need the right security features and data protection policies in their Microsoft 356 tenants (i.e., OneDrive, SharePoint) to avoid Copilot misuse. 

How to Prepare Your Company to Deploy AI Assistants

1. Understand Your Company’s Risks in Google Workspace and Microsoft 365

The key to deploying AI assistants is having complete visibility into your company’s Google Workspace and Microsoft 365 environments and understanding their risks. 

Your company’s cloud collaboration environments must have the proper permissions, access controls, and security policies set in place. You should be able to monitor and investigate risks with ease and quickly change access permissions when necessary.

Microsoft suggests several steps for configuring your organization’s tenant, which you may be familiar with if you’ve worked with a company that uses Microsoft. For best practices on securing company data and documents in Google Workspace, visit here

2. Create a security plan for deployment

Having a plan for protecting your data in Google Workspace and Microsoft 365 is a vital resource before enabling AI assistants. 

Deploying a robust security plan requires a comprehensive understanding of your company’s cloud collaboration environment. Here are the points to consider:

  1. Data Classification: Identifying sensitive data is the first step. What does your company consider to be sensitive data? Where is it stored in your company (i.e., Google Drive, Microsoft SharePoint, OneDrive, and Teams)? 
  2. Data Sharing: Companies must understand internal and external sharing practices, including how employees share files and links. 
  3. Access Controls: Who has access to shared company data and files, and what permissions do they hold?
  4. Data Loss Prevention: Does your company employ Microsoft or Google Workspace Data Loss Prevention (DLP) tools or other solutions?
  5. Access Management: Are current access controls for users and groups adequate? Have access policies been updated based on identified risks?
  6. Data Lifecycle: Does your company have a plan for data and access control throughout its lifecycle, including data retention policies?
  7. Employee Training: Are there ways to educate employees on sharing risks and permissions?
  8. Permission Management: Do employees have tools to clean up access misconfigurations and maintain information security?
  9. AI Impact: If your company deploys Google Gemini or Microsoft Copilot, what would be the impact on your organization? What sensitive information could be exposed?

3. Educate and train employees

Employees are crucial allies in protecting company data, but without the proper training and tools, they are left vulnerable. Tight schedules (a reason many use AI assistants) leave them with limited resources and time for access control. The burden of security awareness and enforcement often falls on IT and Security teams, who are frequently overloaded themselves.

To empower employees when using AI assistants, it’s vital to equip them with resources that clarify guidelines and potential risks. Understanding permissions and access levels for company data, especially sensitive files, is crucial.

For more information on how employees can gain visibility and control over access to their Google Drive, Microsoft SharePoint, OneDrive, and Teams files, visit here.

How to Secure Your Company’s Microsoft 365 and Google Workspace

Generative AI assistants will become ubiquitous as more organizations move to adopt them. It’s vital to understand their offerings and get a handle on security risks before rolling out these tools. 

While Copilot currently has more business-to-business use cases than Gemini for Workspace, Microsoft provides a roadmap that Google is likely to follow. Many of the same risks, including bad permissions and oversharing, will become relevant for Gemini as the AI assistant develops. 

When deploying and using generative AI tools, companies must secure their entire Google Workspace and Microsoft 365 environments. For help securing your company’s cloud collaboration tools, reach out here

 

Incredible companies use Nira

Every company that uses Google Workspace should be using Nira.
Bryan Wise
Bryan Wise,
Former VP of IT at GitLab

Incredible companies use Nira