
Growing ChatGPT jitters from Korean tech firms – The Korea JoongAng Daily


A man looking for information using artificial intelligence through ChatGPT [SHUTTERSTOCK]

Korean tech companies are pondering the potential downsides of ChatGPT, with Samsung Electronics restricting workplace access to the generative AI chatbot due to security concerns.
 
The Microsoft-backed generative AI chatbot has been a disruptive force across various industries with its human-like capabilities, which include answering questions, writing essays and fixing bugs in code.
 
However, fears over data leaks have grown as increasing amounts of sensitive information are entered into the platform.
 
Global companies such as Apple and Google have already banned the internal use of such generative AI chatbots over concerns that confidential data typed into these systems could be leaked. Korean companies have grown similarly concerned.
 
Samsung Electronics recently confirmed it is pressing ahead with building its own generative AI system with an external partner.
 
Kyung Kye-hyun, president of Samsung Electronics’ semiconductor business, said that the company is developing its own version of ChatGPT to prevent possible data leaks but also to take advantage of the highly advanced system.
 
“There are mixed opinions on whether to use ChatGPT or not,” said Kyung.
 
“I believe we have to use it. An engineer with six years of experience takes one hour to write a piece of code, while ChatGPT takes 10 minutes. Why can’t we use such an excellent system? I am going to make [the company] use ChatGPT in some form starting next year.”
 

He said Samsung’s AI-backed chatbot will be made with a partner. A local media outlet reported that the partner would be Naver, an IT firm that operates Korea’s biggest search engine. However, both companies declined to confirm the report.
 
“Collaborating with an external partner is true, but nothing has been decided yet,” said a spokesperson from Samsung Electronics.
 
“The development is at a very early stage.”
 
Samsung and Naver are already collaborating on developing AI chips.
 
Samsung has good reason to move quickly on preventing data leaks, as the company was one of the early victims.
 
An employee at Samsung Electronics was found to have entered the source code of a confidential company program into ChatGPT early this year. Another employee was found to have entered the entire content of a meeting into the chatbot to generate meeting minutes.
 
Samsung Electronics’ Device eXperience division banned the system completely after surveying employees on whether they think it threatens the company’s security. Sixty-five percent of respondents said the system poses security risks.
 
Its Device Solutions division, which oversees the semiconductor business, has not banned the system entirely but has limited how many words a user can enter at once.
 
LG Electronics, another major electronics company in Korea, is taking an alternative approach to ChatGPT.
 
It has banned its employees from using the generative AI service over data leak concerns since May. It has instead opened its own GPT-based chatbot on its in-house platform, L-Genie.
 
LG Electronics employees are allowed to use that generative AI chatbot, and the content they input into the system will not be used by OpenAI to update its GPT models, according to a source in the IT industry.
 
SK hynix, a Korean chipmaker, has also banned the AI chatbot service from its network entirely, except in special cases.
 

Banning the use of ChatGPT is the most widely used solution to counter possible data leaks, but it would only be a quick fix, according to industry experts.
 
“ChatGPT, or any type of generative AI, is not a service you can go without in the long term,” said Kwak Jin, a professor at Ajou University’s Department of Cyber Security. “It is an inevitable trend, and companies that ban the service would have to give up on the efficiency it can offer.”
 
Setting up guidelines and educating employees on the appropriate use of generative AI is the most practical solution for now.
 
“Banning the system cannot be an ultimate answer, because employees can access the service if they want to through an external network or through their smartphones,” said Professor Lee Sang-kyun of Korea University’s School of Cybersecurity.
 
“Setting up guidelines at the company, industry and country level is necessary so that people are aware that they can involuntarily become a major culprit in spilling their companies’ most confidential data.”
 

BY JIN EUN-SOO [jin.eunsoo@joongang.co.kr]




