The risks of inserting confidential business data into AI “chat” platforms

Amid the excitement generated by AI chat models like ChatGPT, which can be applied to a wide range of tasks, one fact is being overlooked: employees are sharing confidential organizational information in conversational prompts without realizing that they may be feeding the language models themselves, and that this information could surface in answers to other users’ future queries. This can have serious consequences for companies and employees alike. Let’s delve into this discussion!

Employees are using language models like ChatGPT to process sensitive business data and private information, and this raises serious data-security concerns.

According to an article published by the British group Informa, concern over the security of confidential business data has escalated after it was found that more than 4% of employees at monitored client companies had submitted such information to ChatGPT.

Organizations are apprehensive that the popularity of language models will facilitate large-scale leaks of proprietary and confidential company information, often exposing strategic data.

The risk is that this information may be incorporated into artificial intelligence (AI) models and subsequently surfaced to other users, with no safeguards for data security along the way.

As reported in the article, the data security company Cyberhaven blocked attempts to input data into ChatGPT by 4.2% of the 1.6 million employees at its client companies precisely because of this risk.
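Cyberhaven’s detection technology is proprietary, but the general idea of intercepting a prompt before it leaves the corporate network can be illustrated with a minimal sketch. Everything below, from the pattern list to the function names, is hypothetical and deliberately simplistic compared with a real DLP product:

```python
import re

# Hypothetical patterns a data-loss-prevention (DLP) filter might flag;
# a real product uses far more sophisticated detection than these regexes.
SENSITIVE_PATTERNS = [
    re.compile(r"\bconfidential\b", re.IGNORECASE),  # document markings
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # US SSN-like numbers
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),           # card-number-like digit runs
]

def is_blocked(prompt: str) -> bool:
    """Return True if the prompt appears to contain sensitive data."""
    return any(p.search(prompt) for p in SENSITIVE_PATTERNS)

def send_to_chatbot(prompt: str) -> str:
    """Gatekeeper placed between the employee and the external chat API."""
    if is_blocked(prompt):
        # Stop the request before it ever leaves the company network.
        raise PermissionError("Prompt blocked: possible confidential data detected.")
    return f"[sent to external service] {prompt}"  # stand-in for the real API call

print(send_to_chatbot("Summarize this public press release."))
try:
    send_to_chatbot("CONFIDENTIAL: Q3 acquisition plan, card 4111 1111 1111 1111")
except PermissionError as exc:
    print(exc)
```

The design point is simply that the check happens on the company’s side, before any data reaches the AI provider.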

In one case, an executive pasted a strategic company document into ChatGPT and asked it to create a PowerPoint presentation.

In another case, a doctor shared a patient’s name and medical condition with ChatGPT to draft a letter to the insurance company.
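A common mitigation for cases like this is to redact identifying details locally before the text ever reaches the model. Here is a minimal sketch, with a hypothetical patient name; real de-identification tooling is considerably more sophisticated:

```python
import re

def redact(text: str, replacements: dict[str, str]) -> str:
    """Swap known sensitive strings for neutral placeholders so that
    only de-identified text is sent to the external chat service."""
    for secret, placeholder in replacements.items():
        text = re.sub(re.escape(secret), placeholder, text, flags=re.IGNORECASE)
    return text

# Hypothetical prompt the doctor might otherwise have sent verbatim.
prompt = "Draft a letter to the insurer about Jane Doe's asthma treatment."
print(redact(prompt, {"Jane Doe": "[PATIENT]", "asthma": "[CONDITION]"}))
# Output: Draft a letter to the insurer about [PATIENT]'s [CONDITION] treatment.
```

Because the substitution mapping stays on the sender’s machine, the placeholders can be swapped back into the model’s draft afterwards, and the sensitive values never leave the organization.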

This points to a clear trend: as more employees adopt these AI services as productivity tools, the risk of leaking confidential information will increase.

Given this scenario, employers are advised to adopt preventive measures, such as adding to their confidentiality agreements and internal policies an explicit prohibition on entering confidential information, trade secrets, and similar material into AI chatbots or language models like ChatGPT.

Such measures discourage employees from using protected information without permission, a practice that creates legal risks for employers.

According to experts, because ChatGPT has been trained on vast amounts of online information, employees may also receive and reuse material that belongs to someone else, such as trademarks, copyrighted works, or other intellectual property, which likewise poses a legal risk to employers.

For this reason, it is of utmost importance that companies, through their compliance programs, start considering these issues before allowing employees to use AI tools, and comprehensively analyze the impacts that improper use can cause.

Did you find it informative to learn about the risks of sharing confidential business information with ChatGPT? Leave your comment!
