Employees' use of language models such as ChatGPT to submit sensitive business data and private information raises serious data-security concerns.
According to an article published by the British group Informa, concern over the security of confidential business data has escalated after it emerged that over 4% of employees at client companies had entered such information into ChatGPT.
Organizations fear that the growing popularity of language models could enable large-scale leaks of proprietary and confidential company information, often exposing strategic data.
The risk is that this information may be incorporated into artificial intelligence (AI) models and later surface for other users, without any guarantee of data security.
As reported in the article, the data security company Cyberhaven blocked attempts to enter data into ChatGPT by 4.2% of the 1.6 million employees at its client companies because of this risk.
In one case, an executive sent a strategic document of the company to ChatGPT to create a PowerPoint presentation.
In another case, a doctor shared a patient’s name and medical condition with ChatGPT to draft a letter to the insurance company.
As more employees adopt these AI services as productivity tools, the risk of leaking confidential information is expected to grow.
Given this scenario, employers are advised to adopt preventive measures, such as adding to confidentiality agreements and employee policies an explicit prohibition on entering confidential information, trade secrets, and similar protected data into AI chatbots or language models like ChatGPT.
This helps prevent employees from using protected information without permission, conduct that could create legal exposure for the employer.
According to experts, because ChatGPT has been trained on vast amounts of online information, employees may receive and reuse content that belongs to someone else, such as trademarks, copyrighted material, or other intellectual property, which likewise poses a legal risk to employers.
For this reason, it is essential that companies address these issues through their compliance programs before allowing employees to use AI tools, and that they comprehensively analyze the impacts that improper use can cause.
Did you find this overview of the risks of sharing confidential business information with ChatGPT informative? Leave a comment!