
Tech View: AI tools can create risks to data security


Over the past few months, we have seen a massive surge in the popularity of “artificial intelligence” tools, with OpenAI’s ChatGPT and Google’s Bard platforms dominating the news.

As with all new technical advancements, it's important for businesses to understand the potential risks they introduce, as well as the limits of their functionality, so that prudent policies are adopted for their use.

What are large language models?

Generically referred to as artificial intelligence, ChatGPT and Bard are considered large language model (LLM) programs.

What is an LLM? Imagine you have a giant Lego set with millions of pieces. You can build anything you want with it, like a house, a car or even a spaceship. An LLM is like that Lego set, but instead of building things with physical objects, it builds things with words.

LLMs function by being “trained” on a massive amount of text data from various sources. This training data helps the model learn language rules such as grammar, spelling and meaning. Once the model has learned from the training data, it can generate new language on its own, typically as text. It can answer questions, write stories and even chat with users.

LLMs are becoming increasingly popular in businesses, as they can be used for a variety of tasks, such as generating marketing copy, writing customer support tickets and even creating chatbots. However, the use of LLMs also raises questions about accuracy and creates data security risks.

What are the risks?

One of the biggest risks is that LLMs can generate inaccurate or fabricated content. The media mistakenly call these tools artificial intelligence when in reality they have no actual intelligence. At a very simplified level, an LLM works by guessing the next word to insert, weighing the probability that each candidate is correct against all the other words it knows. This means it has no real ability to fact-check itself, since it doesn’t actually know what it is telling you; it is simply inserting words in the most likely order. There is no indication of where the LLM got its information, and the output is often presented as fact, not as a suggestion from a program.
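To make the word-guessing idea concrete, here is a purely illustrative Python sketch of next-word prediction using a tiny hand-made probability table. Real LLMs learn billions of such weights from training text, but the principle of picking a statistically likely next word is the same.

```python
import random

# A toy "next word" table: for each word, the probability of the words
# that might follow it. Real LLMs learn billions of such weights from
# training text; this hand-made table is purely illustrative.
NEXT_WORD_PROBS = {
    "the": {"contract": 0.5, "client": 0.3, "model": 0.2},
    "contract": {"is": 0.6, "was": 0.4},
    "is": {"signed": 0.7, "confidential": 0.3},
    "was": {"approved": 1.0},
}

def generate(start, max_words=5):
    """Build a sentence by repeatedly guessing a likely next word."""
    words = [start]
    for _ in range(max_words):
        choices = NEXT_WORD_PROBS.get(words[-1])
        if not choices:
            break
        # Pick the next word in proportion to its probability.
        # Nothing here checks whether the sentence is TRUE -- the
        # model only knows which word is statistically likely.
        next_word = random.choices(list(choices), weights=list(choices.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(generate("the"))  # e.g. "the contract is signed"
```

Because the program only follows the probabilities, it will happily produce fluent sentences that are completely untrue.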


Another risk is that LLMs can expose sensitive data. LLMs are trained on massive amounts of data, which can include sensitive information such as customer data, financial data or intellectual property. If an LLM is not properly secured and a business feeds it proprietary information, that information can become part of the model. This means that at some point in the future, the information could be used by the LLM to formulate answers to other users’ requests.

Example: Imagine that you are a lawyer who is asking an LLM to tidy up a contract, so you upload your existing draft with business and client information in it. Your contract may now be part of the LLM’s data set.

If another user then says, “Please provide me an example contract” and the general scope is close enough to yours, it is possible that your contract’s information could be used as the example the LLM provides.

With many people starting to upload code snippets, documents for editing and even personal information to write reports, the risk of an accidental data leak is significant.
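One precaution is to scrub obviously sensitive strings from text before it ever leaves the business. The following is a minimal Python sketch using a few illustrative regular-expression patterns; it is an assumption-laden example, not a substitute for a real data-loss-prevention tool.

```python
import re

# Illustrative patterns only -- real data-loss-prevention tools use
# far more thorough detection than these three regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace sensitive matches with placeholders before sharing."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

draft = "Contact Jane Doe at jane@example.com or 808-555-1234."
print(scrub(draft))
# Contact Jane Doe at [EMAIL REDACTED] or [PHONE REDACTED].
```

In practice, a policy should spell out which categories of data must be scrubbed or withheld entirely, as the steps below describe.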

This doesn’t mean businesses should be afraid of this new technology, but it does need to be implemented with caution. Here are some recommended steps for businesses:

>> Only use LLMs from trusted providers, and communicate to your teams which providers are allowed. (A simple sketch of enforcing such a list follows this checklist.)

>> Create an acceptable-use policy for LLM-based programs within your organization. (You can even ask ChatGPT to write one for you!) This should cover the types of information that should not be fed into an LLM application and the types of work that should not be done with AI assistance.


>> Educate employees about the risks of LLMs.

>> Update your incident response plans to include data leaks to LLMs.
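Returning to the first recommendation, an approved-provider list can be made enforceable rather than advisory. Here is a minimal Python sketch, assuming a hypothetical in-house proxy that can inspect outbound request hostnames; the entries shown are examples only, not an endorsement of specific providers.

```python
# Hostnames of LLM services the business has vetted. Example entries
# only -- populate this with your own approved providers.
APPROVED_LLM_HOSTS = {
    "api.openai.com",
    "generativelanguage.googleapis.com",
}

def is_allowed(hostname: str) -> bool:
    """Return True only for LLM providers on the approved list."""
    return hostname in APPROVED_LLM_HOSTS

print(is_allowed("api.openai.com"))      # True
print(is_allowed("random-llm.example"))  # False
```

The same allowlist idea can be applied in an existing web proxy or firewall, so unapproved AI services are blocked at the network edge rather than by policy alone.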

By taking these steps, businesses can leverage this groundbreaking technology while limiting some of the new risks that come with it.

———

Jordan Silva is senior manager of security and cloud services at Hawaiian Telcom. Reach him at jordan.silva@hawaiiantel.com.



