Samsung employees have unwittingly leaked sensitive data by sharing it with the popular chatbot service ChatGPT.
Samsung employees have shared internal documents, including meeting notes and source code, with the popular chatbot service ChatGPT. ChatGPT uses data provided by users to train its models, with the risk that this data could become available to other users who query the chatbot.
Samsung engineers used ChatGPT to review the company's source code; they asked the chatbot to optimize test sequences for identifying faults in the chips they were designing. According to the website TechRadar, in just under a month the company suffered three data leaks caused by employees exposing sensitive information via ChatGPT.
“In another case, an employee used ChatGPT to convert meeting notes into a presentation, the contents of which were obviously not something Samsung would have liked external third parties to have known.” reported TechRadar.
In response, the multinational IT firm has decided to start developing its own AI for internal use.
Samsung Electronics is warning its employees of the potential risks associated with the use of ChatGPT, explaining that there is no way to prevent the leak of the data provided to OpenAI’s chatbot service.
It is not clear if Samsung has requested the deletion of the data provided by its workers to OpenAI.
Earlier this month, the Italian Data Protection Authority, Garante Privacy, temporarily banned ChatGPT over the unlawful collection of personal data and the absence of systems for verifying the age of minors.
The Authority pointed out that OpenAI does not alert users that it is collecting their data.
According to the announcement, there is no legal basis underpinning the massive collection and processing of personal data to ‘train’ the algorithms on which the platform relies.
[Source: SecurityAffairs / Apr. 10]