
ChatGPT is bringing advancements and challenges for cybersecurity


Understanding why ChatGPT is garnering so much attention takes a bit of background. Until recently, AI models were quite “dumb”: they could only handle specific tasks after being trained on a large dataset that told them what to look for. But over the last five years, research breakthroughs have taken AI to a whole new level, enabling computers to better understand the meaning behind words and phrases.

ChatGPT cybersecurity challenges

Leveraging these mechanics and large language models (LLMs), ChatGPT can translate human language into dynamic and useful machine results. In essence, it allows users to “speak” to their data. It’s not yet perfect, but it’s a major advancement in AI, and we can expect other technology companies to soon release competing models.

As with any new technology, ChatGPT can be used for both good and bad – and this has major implications for the world of cybersecurity. Here’s what we can expect over the coming months.

ChatGPT will advance the cybersecurity industry

ChatGPT is a gold mine of insight that removes much of the legwork involved in research and problem-solving by giving users a single prompt through which to tap knowledge drawn from a vast corpus of public internet text. With this resource at their fingertips, cybersecurity professionals can quickly access information, search for answers, brainstorm ideas and detect and protect against threats faster. ChatGPT has been shown to help write code, identify gaps in knowledge and prepare communications – tasks that enable professionals to perform their daily responsibilities far more efficiently.


In theory, ChatGPT and similar AI models should help close the cybersecurity talent shortage by making individual security professionals significantly more effective – so much so that one person assisted by AI may produce the output that previously required several. It should also help narrow the cybersecurity skills gap by letting even junior personnel with limited cybersecurity experience get the answers and knowledge they need almost instantaneously.

From a business standpoint, ChatGPT will inform a generation of similar AI tools that help companies access and use their own data to make better decisions. Where a team and a series of database queries respond today, a chatbot with an AI engine may respond tomorrow. And because the technology can take on menial, data-driven tasks, organizations may soon reallocate personnel to other initiatives or pair them with an AI to add business value.
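In practice, “speaking to your data” usually means asking a model to turn a plain-English question into a query that existing tooling can run. The sketch below illustrates the idea; it assumes the OpenAI Python client, a gpt-4o-mini model and a hypothetical SQLite incidents table, none of which are specified in this article.

```python
# Minimal sketch: ask an LLM to translate a plain-English question into SQL,
# then run that SQL against a local database. The model name, schema and
# table are illustrative assumptions, not anything described in the article.
import sqlite3

from openai import OpenAI  # pip install openai

SCHEMA = "incidents(id INTEGER, severity TEXT, source_ip TEXT, detected_at TEXT)"


def question_to_sql(question: str) -> str:
    # Ask the model for a single read-only SELECT statement over the schema.
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": f"Translate the user's question into one SQLite SELECT "
                        f"statement over this schema: {SCHEMA}. Return only SQL."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content.strip()


def run_query(db_path: str, question: str):
    sql = question_to_sql(question)
    # Guard: never execute anything other than a SELECT generated by the model.
    if not sql.lower().lstrip().startswith("select"):
        raise ValueError(f"Refusing to run non-SELECT statement: {sql!r}")
    with sqlite3.connect(db_path) as conn:
        return conn.execute(sql).fetchall()


if __name__ == "__main__":
    rows = run_query("security.db",
                     "How many critical incidents were detected this week?")
    print(rows)
```

A production version would validate the generated SQL far more strictly (or run it against a read-only replica), since the model's output is untrusted input in exactly the sense the rest of this article warns about.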

Bad actors have access, too

Unfortunately, cybersecurity professionals and businesses aren’t the only parties that can benefit from ChatGPT and similar AI models – cybercriminals can, too. We’re already seeing bad actors turn to ChatGPT to make cybercrime easier – using it, for example, to help write malware and to craft believable phishing emails.

The scary thing about ChatGPT is that it is excellent at imitating human writing, which gives it the potential to be a powerful phishing and social engineering tool. Using the technology, non-native speakers will be able to craft phishing emails with perfect spelling and grammar. It will also make it far easier for any bad actor to emulate the tone, word choice and writing style of an intended target – making it harder than ever for recipients to distinguish a legitimate email from a fraudulent one.


Last but certainly not least, ChatGPT lowers the barrier to entry for threat actors, enabling even those with limited cybersecurity background and technical skills to carry out a successful attack.

Ready or not, here it comes

Whether we like it or not, ChatGPT and next-generation AI models are here to stay, which presents us with a choice: we can be afraid of the change and what’s to come, or we can adapt to it and ensure we embrace it holistically by implementing both an offensive and defensive strategy.

From an offensive perspective, we can use it to make workers more productive and help the business make better decisions. From a defensive standpoint, we need a strategy that protects our organizations and employees from the evolving security risks this new technology creates – including updated policies, procedures and protocols to guard against AI-enabled bad actors.

ChatGPT and AI are changing the game for both security professionals and cybercriminals, and we need to be ready. Being aware of the opportunities and challenges associated with this new technology, and then putting a holistic strategy in place, will help you leverage this new era of AI to drive your business. Ignoring these developments puts your business at risk.



