What does ChatGPT 'think' about its own security? – Digital Journal


ChatGPT — © AFP INDRANIL MUKHERJEE

ChatGPT, the AI chatbot launched in November 2022, is being used by students, for forming business plans, for generating code and more. ChatGPT is delivering some exciting developments in the technology world but, as with anything, it carries risks.

Amid recent headlines, many are questioning its utility, including its potential to disrupt education. In a world of increasing cyberattacks and breaches, knowing the risks is paramount to the safety of ordinary people and businesses alike.

Looking into these areas is JP Perez-Etchegoyen, CTO of leading cybersecurity company Onapsis. Perez-Etchegoyen seeks to cut through the noise surrounding ChatGPT by detailing what he thinks the future holds for the platform and whether the benefits outweigh the risks.

Understanding ChatGPT

To get a clearer idea of the risks and rewards, Perez-Etchegoyen says it is important to first understand what ChatGPT is and what it is capable of.

ChatGPT (whose latest underlying model, GPT-4, was released on March 14, 2023) is part of a larger family of AI tools developed by the US-based company OpenAI. Trained using both supervised and reinforcement learning techniques, the application can do far more than most chatbots.

According to Perez-Etchegoyen: “As part of its responses, it can generate content based on all the information it was trained on. That information includes general knowledge as well as programming languages and code. As a result, it can, for instance, simulate an entire chat room; play games like tic-tac-toe; and simulate an ATM.”

He adds: “More importantly, for businesses and other large organisations, it can help improve businesses’ customer service through more personalised, accurate messaging. It can even write and debug computer programs. Some of those, and other, features mean that it could both be a cybersecurity ally and a threat.”

Education, filtering, and bolstering defences

On the positive front, one of the most valuable roles it could play is also one of the simplest: spotting phishing. Perez-Etchegoyen notes: “Organisations could entrench a habit in their employees whereby they use ChatGPT to determine if any content they’re not sure about is phishing or if it was generated with malicious intent.”
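As an illustration of the habit Perez-Etchegoyen describes, the check could be wrapped around a chat-completions request. The sketch below only builds the request payload; the system prompt wording and model name are assumptions for illustration, not anything Onapsis or OpenAI has published.

```python
# Minimal sketch: asking a chat model whether a suspicious message looks like
# phishing. The prompt text and model name are illustrative assumptions.
import json

SYSTEM_PROMPT = (
    "You are a security assistant. Classify the user-supplied message as "
    "'phishing' or 'benign' and briefly explain the indicators you see."
)

def build_phishing_check(suspicious_text: str, model: str = "gpt-4") -> dict:
    """Build a chat-completions payload that asks the model to assess
    a message for phishing indicators."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": suspicious_text},
        ],
        "temperature": 0,  # keep the classification as deterministic as possible
    }

payload = build_phishing_check(
    "Your account is locked. Click http://example.test/verify to restore access."
)
print(json.dumps(payload, indent=2))
# In practice the payload would be sent with the official OpenAI client, e.g.:
#   client.chat.completions.create(**payload)
```

Keeping the temperature at zero and pinning the system prompt makes repeated checks on the same message more consistent, which matters if employees are meant to rely on the verdict.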

Perez-Etchegoyen continues: “There are numerous other ways that ChatGPT can be used to bolster cybersecurity efforts. It could, for example, provide a degree of assistance to more junior security workers, whether that’s in communicating any issues they might have or helping them better understand the context of what they’re meant to be working on at any given point. It could also help under-resourced teams curate the latest threats and identify internal vulnerabilities.”

The bad guys are using it too

However, cybercriminals are also exploring how to use ChatGPT to their advantage. Perez-Etchegoyen says: “They might, for example, make use of its ability to generate malicious code. Alternatively, they might use it to generate content that appears to be human-generated, potentially used to trick users into clicking on malicious links, unknowingly leading to dangerous consequences.”

In another example, Perez-Etchegoyen notes: “Some are even using ChatGPT to convincingly mimic legitimate AI assistants on corporate websites, opening up a new avenue in the social engineering battlefront. Remember, the success of cybercriminals largely depends on being able to target as many vulnerabilities as possible, as frequently and quickly as possible. AI tools like ChatGPT allow them to do that by essentially acting as a supercharged assistant that can help create all assets needed for malicious campaigns.”

What does ChatGPT-4 say?

Perez-Etchegoyen asked ChatGPT, “What could prevent malicious actors using ChatGPT with malicious intent?” This was the answer:

“There are several measures that could prevent malicious actors from using ChatGPT with malicious intent:

1. Access controls: OpenAI, the organization behind ChatGPT, has implemented access controls to limit who can access their API and use their technology. By restricting access, they can prevent malicious actors from using their technology for malicious purposes.

2. Monitoring and detection: OpenAI monitors the usage of their technology to detect and prevent malicious activities. They use machine learning models to identify patterns and anomalies in usage that could indicate potential misuse.

3. Ethical guidelines: OpenAI has published guidelines for responsible use of their technology, which outline best practices and ethical considerations. By following these guidelines, users can ensure that they are using the technology in a responsible and ethical manner.

4. User education: Education and awareness can help prevent the misuse of technology. OpenAI provides resources and educational materials to help users understand the capabilities and limitations of their technology, as well as the potential risks of misuse.

5. Legal consequences: There are legal consequences for using technology like ChatGPT for malicious purposes. Governments and law enforcement agencies have laws and regulations in place to punish individuals and organizations that use technology for illegal activities.

Overall, preventing malicious actors from using ChatGPT with malicious intent requires a combination of technical controls, ethical guidelines, user education, and legal consequences. It is important to use AI language models like ChatGPT responsibly and ethically to prevent the technology from being misused for harmful purposes.”

In other words, ChatGPT went ahead and enumerated all of the measures that OpenAI has taken in order to prevent its misuse.

In the release blog for GPT-4, OpenAI explained how the model is supposedly safer than its predecessors: “We spent 6 months making GPT-4 safer and more aligned. GPT-4 is 82% less likely to respond to requests for disallowed content and 40% more likely to produce factual responses than GPT-3.5 on our internal evaluations.”

Perez-Etchegoyen concludes: “All indications are that the right measures continue to be improved to avoid its misuse, but, as we all know, the bad guys will continue to find ways to overcome those limitations as the model becomes more widely used and understood.”


