Early Gen AI Adopters Beware: Info-Tech Research Group Publishes How to Approach Unauthorized Use and …


The firm’s latest research highlights the critical data security risks posed by generative AI, emphasizing that organizations must proactively identify these risks and implement effective protective measures.

TORONTO, Dec. 8, 2023 /PRNewswire/ – As organizations increasingly harness the benefits of generative AI (Gen AI), they are simultaneously encountering significant risks and challenges, particularly around data confidentiality and integrity. Many early adopters lack the security infrastructure needed to mitigate these risks effectively, leading to potential unauthorized AI use and the subsequent need for retroactive governance measures. In light of these challenges, Info-Tech Research Group has released its latest blueprint, Address Security and Privacy Risks for Generative AI. The research-backed resource provides a strategic framework for addressing immediate Gen AI-related security concerns while laying the groundwork for broader, systemic improvements to data security programs. The firm’s analyst explains that early adoption of Gen AI, while promising, can create uncertainty in assessing and managing risks, primarily due to the technology’s novelty.

“When it comes to using generative AI, the benefits are tangible, but the risks are plentiful, and the tactics to address those risks directly are few,” says Logan Rohde, senior research analyst at Info-Tech Research Group. “Most risks associated with Gen AI are data-related, meaning that effective AI security depends on existing maturity elsewhere in a security program. But if the data security controls are a bit lacking, it doesn’t imply that all hope is lost.”

The firm cautions that the challenge lies in prioritizing which risks to address first, especially as robust data security measures are needed to mitigate the inherent dangers of this emerging technology. Without a straightforward policy, Gen AI could leave users confused about the organization’s stance on its use. Moreover, Info-Tech’s research shows that this technology, if not properly governed, could unintentionally make it easier for bad actors to carry out a variety of cyberattacks.

“The good news is that the greatest and most common risks of using Gen AI can be addressed with an acceptable use policy,” says Rohde. “This should be top priority when considering how an organization might incorporate Gen AI into its business processes.”

In the blueprint, the firm explains that, in most cases, the risks presented by Gen AI are novel versions of familiar data security challenges. This means that, for most organizations, the focus should be on improving or expanding existing controls rather than creating new ones. Info-Tech advises IT leaders navigating AI adoption to focus on what’s within their control, and the newly published resource outlines four key essentials for ensuring responsible use of Gen AI:

  1. AI Suitability Test – Before committing to Gen AI deployment, IT leaders should ensure the benefits outweigh the risks and that there is a specific advantage to using Gen AI as part of a business process.
  2. Gen AI Risk Mapping – Risks will emerge depending on use, and therefore will vary somewhat between organizations. Determining which risks apply to an organization will shape how it governs Gen AI use.
  3. Gen AI Security Policy – A policy detailing required security protocols and acceptable use for Gen AI is the most immediate step all organizations must take to deploy Gen AI securely.
  4. Data Security Improvement Plan – Enterprise use of Gen AI carries significant risks to data security. If any current controls are insufficient to account for Gen AI risks, a plan should be in place to close those gaps.

While Gen AI presents significant opportunities across various industries, Info-Tech’s Exponential IT research reveals that AI security is still in its formative stages, with rapidly evolving best practices that IT leaders need to be aware of. The firm’s latest Gen AI blueprint highlights the need for specific controls and techniques and advises organizations to ground Gen AI use in a robust data security program. By implementing these measures, IT leaders and their organizations can effectively navigate the complexities of Gen AI, leveraging its potential for innovation while ensuring the highest level of data security in an exponentially advancing technological landscape.

For exclusive and timely commentary from Logan Rohde, a cybersecurity and privacy expert, and access to the complete Address Security and Privacy Risks for Generative AI blueprint, please contact [email protected].

About Info-Tech Research Group

Info-Tech Research Group is one of the world’s leading information technology research and advisory firms, proudly serving over 30,000 IT professionals. The company produces unbiased and highly relevant research to help CIOs and IT leaders make strategic, timely, and well-informed decisions. For 25 years, Info-Tech has partnered closely with IT teams to provide them with everything they need, from actionable tools to analyst guidance, ensuring they deliver measurable results for their organizations.

Media professionals can register for unrestricted access to research across IT, HR, and software, as well as to over 200 IT and industry analysts, through the firm’s Media Insiders program. To gain access, contact [email protected].

For information about Info-Tech Research Group or to access the latest research, visit infotech.com and connect via LinkedIn and X.

SOURCE Info-Tech Research Group




