
AI helps humans speed app modernization, improve security


The desire to modernize application development is causing many organizations to look to generative AI to assist human teams in building or refactoring applications.

But modernization isn’t the only challenge that augmenting humans with AI can solve — it can also help address the skills gap and improve cybersecurity.

Using generative AI in cybersecurity

The “2023 Technology Spending Intentions Survey” from TechTarget’s Enterprise Strategy Group revealed that the top skills gap is in cybersecurity. AI-powered automation through tools such as ChatGPT can revolutionize the way organizations with limited staff protect themselves against cyberthreats. By harnessing the power of generative AI, organizations can enhance application security and threat-detection capabilities, streamline security operations and improve overall resilience.

Some security tools already use generative AI such as ChatGPT to automate script responses for faster remediation. ChatGPT has drawn attention because its natural language processing capabilities let it understand and respond to commands and queries, which has raised interest in applying generative AI to application security, especially for security teams that may not have coding experience.
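As a rough illustration of that idea, the sketch below sends a natural-language question about a dependency-scanner finding to a chat-completions API and prints the model's plain-language explanation. It assumes the openai Python package and an OPENAI_API_KEY environment variable; the model name, prompt and finding text are placeholders for illustration, not anything prescribed in the article.

```python
# Minimal sketch: ask a generative model to explain a scanner finding in plain
# language and suggest a fix. Model name, prompt and finding are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

finding = (
    "npm audit: lodash 4.17.15 - prototype pollution (CVE-2020-8203), "
    "fixed in 4.17.19"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "You are an application-security assistant. "
                    "Explain findings for developers without security training."},
        {"role": "user",
         "content": f"Explain this scanner finding and suggest a remediation:\n{finding}"},
    ],
)

print(response.choices[0].message.content)
```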

Using ChatGPT to automate scripts offers several advantages. It enables faster incident response because the AI model can quickly analyze and interpret security logs, identify potential threats and initiate appropriate actions. This rapid response is crucial to minimizing the impact of a cyberattack and mitigating potential damage.
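One way such log-driven automation might look in practice is sketched below: a simple regular-expression pre-filter flags suspicious auth-log lines, and only those excerpts go to a model for a severity call and a suggested next step. The log path, keywords and model name are assumptions for illustration, and the sketch deliberately stops at a suggestion rather than executing anything.

```python
# Minimal sketch of model-assisted log triage: pre-filter auth logs locally,
# then ask a generative model to classify flagged lines and suggest one step.
# Log format, keywords and model name are illustrative assumptions.
import re
from openai import OpenAI

SUSPICIOUS = re.compile(r"Failed password|Invalid user|maximum authentication attempts")

def flag_lines(path: str) -> list[str]:
    """Return log lines matching simple indicators of brute-force activity."""
    with open(path) as fh:
        return [line.strip() for line in fh if SUSPICIOUS.search(line)]

def triage(lines: list[str]) -> str:
    """Ask the model for a severity call and a suggested (not executed) action."""
    client = OpenAI()
    prompt = (
        "Classify the following auth-log excerpts as benign or suspicious, "
        "and suggest one containment step. Do not assume any action is taken.\n\n"
        + "\n".join(lines[:50])  # cap the excerpt to keep the prompt small
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    flagged = flag_lines("/var/log/auth.log")  # path is an assumption
    if flagged:
        print(triage(flagged))
```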

Generative AI automation capabilities can also help eliminate human error and biases, enhancing the overall accuracy of security operations. The model can consistently follow predefined protocols and security best practices, reducing the likelihood of oversight or misconfiguration.


Applying AI is also imperative in today’s rapidly evolving threat landscape to help SecOps teams. As cyberthreats grow in sophistication and scale, organizations will need AI to bolster their defense mechanisms. AI-powered cybersecurity products can analyze vast amounts of data, identify patterns and detect anomalies that might indicate a potential breach or malicious activity. These advanced systems can autonomously monitor network traffic, endpoints and user behavior, enabling real-time threat detection and response. By using AI algorithms and machine learning techniques, cybersecurity professionals can stay one step ahead of cybercriminals and proactively defend against emerging threats.
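The anomaly-detection side of this does not require a generative model at all. A small unsupervised sketch like the one below, using scikit-learn's IsolationForest on toy network-flow features, shows the general shape of flagging unusual traffic for review; the feature set, sample values and contamination rate are illustrative assumptions.

```python
# Minimal sketch of the anomaly-detection idea: score network-flow records
# with an unsupervised model so unusual traffic stands out for review.
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy flow records: [bytes_sent, bytes_received, duration_s, distinct_ports]
flows = np.array([
    [ 1_200,   3_400, 2.1,  1],
    [   980,   2_900, 1.8,  1],
    [ 1_100,   3_100, 2.0,  1],
    [95_000, 120_000, 0.4, 45],   # bursty, many ports: looks like a scan
])

model = IsolationForest(contamination=0.25, random_state=0).fit(flows)
preds = model.predict(flows)  # -1 = anomaly, 1 = normal

for flow, pred in zip(flows, preds):
    label = "ANOMALY" if pred == -1 else "ok"
    print(f"{label:7s} flow={flow.tolist()}")
```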


These capabilities should help operations teams work more efficiently. AI can be a valuable tool for managing security incidents, performing vulnerability assessments and conducting threat intelligence analysis. Automating routine tasks and decision-making processes lets security teams focus on the more complex and strategic aspects of cybersecurity, maximizing efficiency and reducing response times.
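A routine task of the kind mentioned above can be as mundane as ranking scanner findings so analysts see the riskiest items first. The short sketch below assumes a hypothetical JSON export format and simply sorts findings by CVSS score; the report structure is an assumption for illustration.

```python
# Minimal sketch of automating a routine SecOps task: sort findings from a
# hypothetical scanner JSON export by CVSS score, highest risk first.
import json

sample_report = """
[
  {"id": "CVE-2021-44228", "component": "log4j-core 2.14.1", "cvss": 10.0},
  {"id": "CVE-2020-8203",  "component": "lodash 4.17.15",    "cvss": 7.4},
  {"id": "CVE-2019-10744", "component": "lodash 4.17.11",    "cvss": 9.1}
]
"""

findings = json.loads(sample_report)
for f in sorted(findings, key=lambda f: f["cvss"], reverse=True):
    print(f'{f["cvss"]:4.1f}  {f["id"]:16s}  {f["component"]}')
```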

Organizations also can achieve better scalability and cost efficiency in their cybersecurity operations by using ChatGPT. The model can handle a high volume of security tasks simultaneously, enabling security teams to effectively manage a large number of endpoints and security events. This scalability is particularly valuable for organizations with complex infrastructures or those experiencing rapid growth. The automation of scripts through ChatGPT also reduces the need for manual intervention, optimizing resource utilization and minimizing operational costs.
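To illustrate the scalability point, the sketch below fans a batch of security events out to a triage function with a thread pool so one slow lookup does not block the rest of the queue. The event fields and the triage rule are placeholders; in practice the triage step might call a model or an external API.

```python
# Minimal sketch of handling many security events concurrently with a thread
# pool. Event fields and the triage rule are illustrative placeholders.
from concurrent.futures import ThreadPoolExecutor

def triage_event(event: dict) -> str:
    """Placeholder triage step; a real one might call a model or an API."""
    severity = "high" if event.get("failed_logins", 0) > 10 else "low"
    return f'{event["host"]}: {severity}'

events = [
    {"host": "web-01", "failed_logins": 2},
    {"host": "web-02", "failed_logins": 37},
    {"host": "db-01",  "failed_logins": 0},
]

with ThreadPoolExecutor(max_workers=8) as pool:
    for result in pool.map(triage_event, events):
        print(result)
```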

Using AI responsibly is key

While we are optimistic about using AI for modernization efforts and cybersecurity, it is important to understand that AI can make mistakes or take incorrect actions. AI needs to be used responsibly, and it cannot replace human expertise and oversight. Human involvement remains crucial for strategic decision-making, and complex threat analysis and interpreting contextual information are difficult to address with AI models alone. Collaboration between AI systems such as ChatGPT and security professionals ensures the best outcomes, combining the strengths of human intelligence and machine learning.
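One simple way to keep that human oversight in place is an explicit approval gate, sketched below: an AI-suggested remediation is shown to an analyst and only carried out after confirmation. The suggestion payload and the block_ip helper are hypothetical placeholders, not a real product workflow.

```python
# Minimal sketch of a human-in-the-loop approval gate: an AI-suggested
# remediation runs only after an analyst explicitly approves it.
def block_ip(address: str) -> None:
    """Placeholder for a real firewall or EDR API call."""
    print(f"[action] blocking {address}")

suggestion = {"action": "block_ip", "target": "203.0.113.42",
              "reason": "30 failed SSH logins in 60 seconds"}

print(f"AI suggestion: {suggestion['action']} {suggestion['target']} "
      f"({suggestion['reason']})")
answer = input("Approve this action? [y/N] ").strip().lower()

if answer == "y":
    block_ip(suggestion["target"])
else:
    print("Suggestion logged for review; no action taken.")
```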

This information just scratches the surface of how to use AI. Here are some additional recommended resources to help organizations think about how to successfully apply these new technologies:

Generative AI


Modernization trends from Enterprise Strategy Group

This video is part of a series on modernization trends. In it, Enterprise Strategy Group analysts Melinda Marks, who covers cloud and application security, and Paul Nashawaty, who covers infrastructure and application modernization, talk with Alessandro Perilli, AI researcher at Synthetic Work, about using AI to reduce mundane, tedious tasks so organizations can cope with the skills gap and staff members and teams can work more efficiently.

An industry veteran, inventor, book author and speaker, Perilli has been an entrepreneur with a tech consultancy and media companies. He was an industry analyst and an executive at Red Hat, where he helped build out its strategy around IT automation for security orchestration and AI for predictive analytics. He currently runs Synthetic Work, a website and newsletter about the intersection of AI and humans.

Enterprise Strategy Group is a division of TechTarget. Its analysts have business relationships with technology vendors.



