Samsung has officially banned its employees from using generative AI tools like ChatGPT, citing "growing concerns about security risks presented by generative AI." Meanwhile, in Italy, the national ban on ChatGPT was lifted after OpenAI complied with the privacy regulator's demands for more disclosure and privacy tools.
In a memo viewed by Bloomberg News, Samsung told staff that AI tools like Google Bard and Bing, which store information on external servers, pose a security risk. The ban applies to its internal networks and company-owned devices such as PCs, phones, and tablets. Last month, Samsung restricted use of the AI chatbot after staff inadvertently leaked confidential information multiple times.
The memo reads, “While this interest focuses on the usefulness and efficiency of these platforms, there are also growing concerns about security risks presented by generative AI.”
Employees still using AI tools have been warned not to submit any company information or Samsung intellectual property, or risk "disciplinary action up to and including termination of employment."
Samsung is developing internal AI tools for translating and summarizing documents. The core issue is that conversations with AI chatbots are used to train the underlying large language models, so when an employee asks a chatbot to summarize notes from a confidential product meeting, those details end up stored on an external server the company cannot access.
“HQ is reviewing security measures to create a secure environment for safely using generative AI to enhance employees’ productivity and efficiency,” Samsung wrote.
ChatGPT recently added an “incognito” mode that prevents your chats from being used to train its AI.
While ChatGPT is being banned in some places, others, like Italy, have lifted their bans on the generative AI chatbot. Italy allowed ChatGPT to resume operations after banning it over privacy concerns last month, and OpenAI recently announced a new set of privacy controls in response to the Italian regulator's suspension order.
OpenAI's new privacy policy now requires users to confirm they are over 18, or over 13 with a parent or guardian's consent, to use ChatGPT. The company also provides more information about how they "develop and train" their AI. Additionally, the regulator wants OpenAI to give users tools to "exercise their rights and get falsities the chatbot generates about them rectified."
The Italian data protection authority was concerned that the chatbot breached the EU's General Data Protection Regulation (GDPR). OpenAI responded by blocking all Italian IP addresses in early April.
The new policy also allows users to delete the chat history used to train OpenAI's models, and OpenAI says it will work to fulfill its "compliance obligations."
ChatGPT still has a way to go to appease Italian regulators. However, the Italian SA “acknowledges the steps forward made by OpenAI to reconcile technological advancements with respect for the rights of individuals, and it hopes that the company will continue in its efforts to comply with European data protection legislation.”