Ride the Lightning
Cybersecurity and Future of Law Practice Blog
by Sharon D. Nelson Esq., President of Sensei Enterprises, Inc.
Samsung Employees Reportedly Leaked Sensitive Data to ChatGPT
April 12, 2023
Engadget reported on April 7 that soon after Samsung’s semiconductor division started allowing engineers to use ChatGPT, workers leaked secret info to it on at least three occasions, according to The Economist Korea (as spotted by Mashable). One employee reportedly asked the chatbot to check sensitive database source code for errors, another solicited code optimization, and a third fed a recorded meeting into ChatGPT and asked it to generate minutes.
Reports suggest that, after learning about the security failures, Samsung attempted to limit the extent of future problems by restricting the length of employees’ ChatGPT prompts to a kilobyte, or 1024 characters of text. Reportedly, the company is investigating the three employees in question and building its own chatbot to prevent similar mistakes.
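For organizations worried about the same risk, a blunt guardrail like the reported 1024-character cap is easy to put in front of a chatbot integration. The sketch below is purely illustrative — the function and error names are assumptions, not Samsung's actual implementation:

```python
# Hypothetical sketch: cap outbound chatbot prompts at one kilobyte,
# similar in spirit to the limit Samsung reportedly imposed.
MAX_PROMPT_BYTES = 1024  # "a kilobyte" per the report


class PromptTooLongError(ValueError):
    """Raised when a prompt exceeds the configured size cap."""


def check_prompt(prompt: str) -> str:
    """Return the prompt unchanged if its UTF-8 size is within the cap."""
    size = len(prompt.encode("utf-8"))
    if size > MAX_PROMPT_BYTES:
        raise PromptTooLongError(
            f"prompt is {size} bytes; limit is {MAX_PROMPT_BYTES}"
        )
    return prompt
```

A length cap limits how much can leak in one prompt, but as the incident shows, it does nothing about *what* is pasted in — policy and training still matter.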
ChatGPT’s data policy states that, unless users explicitly opt out, it uses their prompts to train its models. The chatbot’s owner, OpenAI, urges users not to share secret information with ChatGPT in conversations as it’s “not able to delete specific prompts from your history.” The only way to get rid of personally identifying information on ChatGPT is to delete your account — a process that can take up to four weeks.
Before you ask ChatGPT to summarize important memos or review your work for errors, remember that anything you share with ChatGPT could be used to train the system and perhaps pop up in its responses to other users. The Samsung employees who caused the mishap probably should have been aware of this before they reportedly shared confidential information with the chatbot.
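One practical takeaway is to screen text for obvious red flags before it ever leaves your machine. A minimal, illustrative filter might look like the following — the patterns and names are assumptions for demonstration, not a substitute for a vetted data-loss-prevention tool:

```python
import re

# Hypothetical red-flag patterns for material that should never
# reach a public chatbot: confidentiality markings, SSN-shaped
# numbers, and credential assignments.
SENSITIVE_PATTERNS = [
    re.compile(r"(?i)\bconfidential\b"),
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),         # US SSN-shaped numbers
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),  # API-key assignments
]


def contains_sensitive(text: str) -> bool:
    """Return True if any red-flag pattern appears in the text."""
    return any(p.search(text) for p in SENSITIVE_PATTERNS)
```

A check like this catches only the obvious cases; the safest rule remains the simple one above — assume anything you type into a public chatbot may resurface.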
Sharon D. Nelson, Esq., President, Sensei Enterprises, Inc.
3975 University Drive, Suite 225, Fairfax, VA 22030
Phone: 703-359-0700
Digital Forensics/Cybersecurity/Information Technology