Learn to use AI like a Pro. Learn More

AI in a Vulnerable World

Microsoft’s Copilot AI: Friend or Foe?

Last updated:

Mackenzie Ferguson

Edited By

Mackenzie Ferguson

AI Tools Researcher & Implementation Consultant

At the Black Hat security conference, researcher Michael Bargury showcases disturbing vulnerabilities in Microsoft's Copilot, turning it into an automatic spear-phishing tool. Can this AI's benefits outweigh the risks?

Banner for Microsoft’s Copilot AI: Friend or Foe?

Microsoft's rapid integration of generative AI into its systems, exemplified by its Copilot AI, aims to boost productivity by extracting and organizing information from emails, Teams chats, and various other files. However, this very connectivity poses significant security risks. Recent research showcased at the Black Hat security conference by Michael Bargury, cofounder and CTO of Zenity, demonstrates several ways that malicious actors can exploit Copilot. The findings reveal vulnerabilities that could allow attackers to manipulate answers, extract sensitive data, and bypass security measures.

    One of the most alarming potential exploits showcased is the ability to convert Copilot into an automated spear-phishing machine. Termed LOLCopilot, this red-teaming code could permit hackers with access to a target's work email to mine contacts, mimic the target's writing style, and send personalized phishing emails en masse. This process, which normally requires considerable time and effort, can now be executed within minutes, increasing the risk of successful phishing attacks.

      Learn to use AI like a Pro

      Get the latest AI workflows to boost your productivity and business performance, delivered weekly by expert consultants. Enjoy step-by-step guides, weekly Q&A sessions, and full access to our AI workflow archive.

      Canva Logo
      Claude AI Logo
      Google Gemini Logo
      HeyGen Logo
      Hugging Face Logo
      Microsoft Logo
      OpenAI Logo
      Zapier Logo
      Canva Logo
      Claude AI Logo
      Google Gemini Logo
      HeyGen Logo
      Hugging Face Logo
      Microsoft Logo
      OpenAI Logo
      Zapier Logo

      Bargury's research indicates that these exploits function by leveraging the large language model (LLM) capabilities of Copilot as designed, where simple textual queries can access extensive data. The malicious results arise when additional data or instructions are embedded within queries, prompting the AI to perform unintended and harmful actions. This highlights the precarious nature of integrating AI systems with corporate data, especially when external and untrusted data are incorporated into the mix.

        Further demonstrations by Bargury illustrate how a compromised email account could allow attackers to access sensitive information like employee salaries, bypassing Microsoft's safeguards for protected files. By manipulating the AI’s responses to omit references to sensitive files, hackers can extract data without triggering security alerts. Another attack scenario involves embedding malicious information within emails to manipulate Copilot’s interpretation of banking information, thereby diverting funds to unauthorized accounts.

          Perhaps more concerning is the ability of external harm-doers to extract sensitive corporate information. For instance, a hacker could infer details about an upcoming company earnings call's potential outcomes, or even transform Copilot into a 'malicious insider' that directs users to phishing websites. These forms of data manipulation and extraction underscore the challenges of balancing AI utility with robust security protections.

            Microsoft acknowledges the seriousness of these vulnerabilities, with Phillip Misner, head of AI incident detection and response, stating that the company has been collaborating with Bargury to assess and mitigate the risks. Misner emphasizes that the risks posed by AI post-compromise techniques are similar to traditional post-compromise methods, necessitating robust security prevention and monitoring across all environments and user identities.

              Learn to use AI like a Pro

              Get the latest AI workflows to boost your productivity and business performance, delivered weekly by expert consultants. Enjoy step-by-step guides, weekly Q&A sessions, and full access to our AI workflow archive.

              Canva Logo
              Claude AI Logo
              Google Gemini Logo
              HeyGen Logo
              Hugging Face Logo
              Microsoft Logo
              OpenAI Logo
              Zapier Logo
              Canva Logo
              Claude AI Logo
              Google Gemini Logo
              HeyGen Logo
              Hugging Face Logo
              Microsoft Logo
              OpenAI Logo
              Zapier Logo

              The rapid development of generative AI systems, including Microsoft’s Copilot, OpenAI’s ChatGPT, and Google’s Gemini, has ushered in a new era of functionality where these AIs can perform complex tasks such as booking meetings and managing online activities. Nonetheless, the integration of unverified external data into these systems perpetuates significant security vulnerabilities, often through indirect prompt injections and data poisoning attacks.

                According to security experts like Johann Rehberger, the amplified efficiency of attackers using AI cannot be underestimated. Rehberger, a security researcher and red team director, notes that while companies like Microsoft have implemented various controls to protect systems like Copilot, persistent vulnerabilities exist. Rehberger's findings suggest that an over-permissive access to data within companies exacerbates these risks, especially when AI systems are employed without stringent oversight.

                  Bargury's research underscores the importance of monitoring AI outputs and verifying the legitimacy of the actions performed by AI systems. He discovered that Copilot's defenses can be systematically bypassed through specific prompts that unlock broader functionalities capable of performing unauthorized actions. This raises critical questions about the adequacy of current AI monitoring mechanisms and the need for enhanced scrutiny over AI interactions with sensitive corporate environments.

                    Both Bargury and Rehberger assert the necessity for improved monitoring and control over AI systems. This entails not just scrutinizing what the AI produces and sends out but also understanding the context and appropriateness of these actions within organizational workflows. As AI continues to evolve, companies must adapt their security protocols to safeguard against sophisticated AI-enabled threats, ensuring that AI advancements do not come at the expense of security integrity.

                      Recommended Tools

                      News

                        Learn to use AI like a Pro

                        Get the latest AI workflows to boost your productivity and business performance, delivered weekly by expert consultants. Enjoy step-by-step guides, weekly Q&A sessions, and full access to our AI workflow archive.

                        Canva Logo
                        Claude AI Logo
                        Google Gemini Logo
                        HeyGen Logo
                        Hugging Face Logo
                        Microsoft Logo
                        OpenAI Logo
                        Zapier Logo
                        Canva Logo
                        Claude AI Logo
                        Google Gemini Logo
                        HeyGen Logo
                        Hugging Face Logo
                        Microsoft Logo
                        OpenAI Logo
                        Zapier Logo