Meet “AI,” your new colleague: Could it expose your company’s secrets?

Guest Contributor
July 12, 2023

 


By Tony Anscombe

Tony Anscombe (AskESET@eset.com) is the Chief Security Evangelist with ESET, a leading global IT security company. With more than 20 years of security industry experience, Anscombe is an established author, blogger and speaker on the current threat landscape, security technologies and products, data protection, privacy and trust, and internet safety.

Chatbots powered by large language models (LLMs) are not just the world’s new favourite pastime; the technology is increasingly being recruited to boost workers’ productivity and efficiency. These chatbots are built and operated by software and tech giants such as OpenAI, Google, and Meta.

Given its increasing capabilities, artificial intelligence is poised to replace some private and public sector jobs entirely, including in areas as diverse as coding, content creation, and customer service.

Many organizations have already tapped into LLM algorithms; chances are, yours will follow suit in the near future. But before you rush to welcome the new “hire” and use it to streamline some of your team’s workflows and processes, there are a few questions you should ask yourself.

Is it safe for my organization to share data with one of these applications?

LLMs are trained on large quantities of text and content available online, which enables these systems to interpret and make sense of people’s queries, also known as prompts. However, that pool of content can also come to include the queries themselves. In other words, every time you ask a chatbot for a piece of code or a simple email to your client, you may also be handing over sensitive data about your company.

The LLM provider or its partners are able to read those queries and may incorporate them into the data used to answer future prompts or to train future versions of the model. Chatbots may never forget or delete your input, since access to more data is what sharpens their output: the more input they are fed, the better they become, and your company or personal data will be caught up in those calculations, where it may remain accessible as a source.

Perhaps to dispel such data privacy concerns, OpenAI in April introduced the ability to turn off chat history in ChatGPT. “Conversations that are started when chat history is disabled won’t be used to train and improve our models, and won’t appear in the history sidebar,” the company wrote in an OpenAI blog post.

Another risk is that queries stored online may be hacked, leaked, or accidentally made publicly accessible. The same applies to any third-party provider you share data with.

What are some known flaws?

Every time a new technology or a software tool becomes popular, it attracts hackers like bees to a honeypot. When it comes to LLMs, their security has been tight so far — at least, it seems so. There have, however, been a few exceptions.

OpenAI’s ChatGPT made headlines in March due to a leak of some users’ chat history and payment details, forcing the company to temporarily take ChatGPT offline on March 20. The company revealed on March 24 that a bug in an open source library “allowed some users to see titles from another active user’s chat history”. 

“It’s also possible that the first message of a newly-created conversation was visible in someone else’s chat history if both users were active around the same time,” according to the OpenAI blog. “Upon deeper investigation, we also discovered that the same bug may have caused the unintentional visibility of payment-related information of 1.2 percent of the ChatGPT Plus subscribers who were active during a specific nine-hour window.”

Have some companies already experienced LLM-related incidents?

In late March, the South Korean media outlet The Economist Korea reported on three separate incidents at Samsung Electronics. Although the company had asked its employees to be careful about what information they enter into their queries, some of them accidentally leaked internal data while interacting with ChatGPT.

One Samsung employee entered faulty source code related to the semiconductor facility measurement database, seeking a solution. Another entered program code for identifying defective equipment, hoping to optimize it. A third uploaded a recording of a meeting in order to generate meeting minutes.

To keep up with progress in AI while protecting its data, Samsung has announced that it plans to develop its own internal “AI service” to help employees with their job duties.

What checks should companies make before sharing their data?

Uploading company data into the model means you are sending proprietary data directly to a third party, such as OpenAI, and giving up control over it. We know OpenAI uses the data to train and improve its generative AI model, but the question remains: is that the only purpose?

If you do decide to adopt ChatGPT or similar tools into your business operations in any way, you should follow a few simple rules:

  • Carefully investigate how these tools and their operators access, store and share your company data. Get curious: flip through the privacy policy and the usage terms and conditions before using these applications. They are often quite long, but they can provide valuable insight into what information is being collected.
  • Develop a formal policy covering acceptable usage of generative AI tools within your business and consider how their adoption works with current policies, especially your customer data privacy policy.
  • This policy should define the circumstances under which your employees can use the tools and should make your staff aware of limitations, such as never putting sensitive company or customer information into a chatbot conversation.
  • Implement access controls and educate employees to avoid inadvertently sharing sensitive information; a minimal sketch of one such safeguard follows this list. Use security software with multiple layers of protection, along with secure remote access tools and measures to protect data centres.
  • Turn off chat history to prevent queries stored online from being hacked, leaked, or accidentally made publicly accessible.
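
To make the access-control point above more concrete, here is a minimal sketch, in Python, of the kind of pre-submission filter an organization might place in front of an external chatbot. Everything in it is illustrative: the patterns and the redact helper are hypothetical examples, and a real deployment would rely on proper data loss prevention (DLP) tooling tuned to the organization’s own data rather than a handful of regular expressions.

import re

# Illustrative pre-filter: mask obvious identifiers before a prompt is sent
# to any external chatbot. The patterns below are examples only, not a
# complete or production-ready set.
REDACTION_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED_EMAIL]"),       # email addresses
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[REDACTED_NUMBER]"),       # long card-like digit runs
    (re.compile(r"(?i)\b(api[_-]?key|secret|password)\s*[:=]\s*\S+"),   # credential assignments
     r"\1=[REDACTED]"),
]

def redact(prompt: str) -> str:
    """Return the prompt with obviously sensitive substrings masked."""
    for pattern, replacement in REDACTION_PATTERNS:
        prompt = pattern.sub(replacement, prompt)
    return prompt

if __name__ == "__main__":
    raw = ("Draft a client email: contact jane.doe@example.com, "
           "api_key=sk-0123456789, card 4111 1111 1111 1111")
    print(redact(raw))

A filter like this is no substitute for staff awareness or for scrutinizing the vendor’s own data handling, but it shows how a simple technical control can catch the kind of accidental disclosure seen in the Samsung incidents.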

Adopt the same kinds of security measures you would apply to software supply chains in general and to other IT assets that may contain vulnerabilities. People may think this time is different because these chatbots are more intelligent than artificial, but the reality is that this is yet more software, with all its possible flaws and security concerns.

R$

