security

Tech Matters: Security tips for artificial intelligence tools – Standard-Examiner



Photo supplied

Leslie Meredith

With artificial intelligence platforms like OpenAI’s ChatGPT, Google’s Gemini (previously called Bard) and Microsoft’s CoPilot making headlines around all the ways they can make you more productive, not a lot has been said about their potential risks. Last week, researchers revealed a theoretical worm — a piece of malware that could move quickly through a network of PCs — they had developed that showed how these AI systems could be exploited. But that’s just one risk. Let’s take a look at several ways, including a worm, that could jeopardize your data.

Before we get started, it’s important to put these risks in perspective. There are security hazards associated with all online activities, and it’s no different with ChatGPT and its competitors. Despite email safeguards, you know a scam can still land in your inbox, but that doesn’t stop you from using email. Instead, you learn how to avoid scams to protect yourself online. It’s no different with AI services — we’re just at an earlier stage with this type of technology.

First, the AI worms. As reported by Wired, a group of researchers at Cornell Tech created one of the first generative AI worms, which can spread from one system to another, potentially stealing data or deploying malware along the way. “It basically means that now you have the ability to conduct or to perform a new kind of cyberattack that hasn’t been seen before,” said Ben Nassi, one of the Cornell Tech researchers.

Readers Also Like:  Security researcher warns of chilling effect after feds search phone at airport - TechCrunch

The team used an AI email assistant in the lab, not one that’s available to the public, to carry out their tests. They found they could write what they called an adversarial self-replicating prompt for the email assistant designed to steal a user’s personal information and inject malware into the user’s computer by circumventing safeguards built into ChatGPT and Gemini, the AI systems used to create the email assistants. The prompt is regenerated every time it’s used to generate an email response. The researchers said they expect to see these worms “in the wild” within the next two to three years.

Security experts reviewing the Cornell paper said there are ways to defend against this type of exploit using traditional security approaches. Developers of AI tools should not trust LLM (Large Language Model, which is what AI systems such as ChatGPT are called) output anywhere in your application. Further, keeping humans in the loop is imperative — every AI agent action should require approval. “You don’t want an LLM that is reading your email to be able to turn around and send an email,” Adam Swanda, a threat researcher at Robust Intelligence, said to Wired. “There should be a boundary there.”

For the user, security measures are similar to the ones you would use for selecting an app. As more and more AI tools are built on top of the LLMs, it is critical that you vet them before using or downloading them. Start by getting your GPTs from a reliable source. Like with Apple’s App Store and Google Play, Open AI launched its GPT Store in January of this year. Here, paying subscribers to ChatGPT4 can browse more than 3 million GPTs (customized chatbots, designed for a specific purpose). Among trending apps you’ll find Canva, the popular graphic design tool, and Wolfram, the AI offshoot of numbers-based Wolfram Alpha, along with Cartoonize Yourself, Scholar GPT and Slide Maker. Stick with the most popular GPTs to limit any risk.

Readers Also Like:  Buffalo, Rochester, Syracuse are tech hub - Spectrum News

It is not clear how OpenAi vets its GPTs at this time. The company says it “takes security seriously and implements measures to protect the models and user data. However, users should still take precautions to safeguard their interactions with the model and any sensitive information they provide as input.”

GPTs that use APIs are riskier than those that don’t. An API is a set of rules or protocols that let software applications communicate with each other to exchange data, features and functionality. If a GPT uses an API to process your request within the GPT, it is unclear what this third party does with the information you have provided. It may be shared with other third parties, used for other purposes than what you intended and could be stored for an indefinite amount of time.

Be just as careful with prompts. If you see a prompt on a website or social media that you’d like to try, type it out instead of copying and pasting. This way you won’t inadvertently copy hidden malicious code.

Finally, pay extra attention to unsolicited emails. Scammers are just as eager to increase the effectiveness of their scams by using AI systems and assistants as you are to increase your productivity.

Leslie Meredith has been writing about technology for more than a decade. As a mom of four, value, usefulness and online safety take priority. Have a question? Email Leslie at asklesliemeredith@gmail.com.



Newsletter

Join thousands already receiving our daily newsletter.




READ SOURCE

This website uses cookies. By continuing to use this site, you accept our use of cookies.