
Microsoft-affiliated Research Finds Flaws in GPT-4 – Slashdot


Sometimes, following instructions too precisely can land you in hot water — if you’re a large language model, that is. From a report: That’s the conclusion reached by a new, Microsoft-affiliated scientific paper that looked at the “trustworthiness” — and toxicity — of large language models (LLMs) including OpenAI’s GPT-4 and GPT-3.5, GPT-4’s predecessor. The co-authors write that, possibly because GPT-4 is more likely to follow the instructions of “jailbreaking” prompts that bypass the model’s built-in safety measures, GPT-4 can be more easily prompted than other LLMs to spout toxic, biased text. In other words, GPT-4’s good “intentions” and improved comprehension can — in the wrong hands — lead it astray.

“We find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, which are maliciously designed to bypass the security measures of LLMs, potentially because GPT-4 follows (misleading) instructions more precisely,” the co-authors write in a blog post accompanying the paper. Now, why would Microsoft greenlight research that casts an OpenAI product it itself uses (GPT-4 powers Microsoft’s Bing Chat chatbot) in a poor light? The answer lies in a note within the blog post: “[T]he research team worked with Microsoft product groups to confirm that the potential vulnerabilities identified do not impact current customer-facing services. This is in part true because finished AI applications apply a range of mitigation approaches to address potential harms that may occur at the model level of the technology. In addition, we have shared our research with GPT’s developer, OpenAI, which has noted the potential vulnerabilities in the system cards for relevant models.”
