
ChatGPT bug bounty pays up to $20k to report security flaws – The American Genius


ChatGPT may be the most talked-about (and written-about) AI tool out there, among dozens that include other AI writing tools, text-to-art systems, and music-composing software. Heck, you can even enlist the help of an AI dating coach or an AI personal assistant. However, because ChatGPT has become so popular, is capable of doing SO MUCH, and mines a mind-boggling amount of written work to create "new" text, it is bound to have growing pains.

The powers that be at OpenAI are fully aware of the risks of potential bugs and breaches. Therefore, the OpenAI announcement states, “This initiative is essential to our commitment to develop safe and advanced AI. As we create technology and services that are secure, reliable, and trustworthy, we need your help.”

Bug Bounty 101

Security bugs are one of the biggest concerns for OpenAI, the company behind ChatGPT. Vulnerabilities to hacking are especially worrisome because ChatGPT saves users' data and cannot, or will not yet, remove it. Therefore, any confidential information is vulnerable to being leaked by the software if someone busts into the system. In an effort to shore up their security defenses and batten down the proverbial hatches, OpenAI is sending out a clarion call for "the contributions of ethical hackers who help us uphold high privacy and security standards for our users and technology."

Show me the money

They intend to make it worthwhile for these ethical hackers, these online AI bounty hunters. Discovering and promptly reporting ChatGPT security vulnerabilities pays out anywhere from $200 to $6,500 per issue, up to a maximum reward of $20,000 per individual. Issues must be reported through OpenAI's Bugcrowd program. In addition, OpenAI will acknowledge and credit anyone who discovers unique vulnerabilities deemed valid and in-scope.


The OpenAI bug bounty team promises to review bug reports quickly and reply to all submissions. They will use the Bugcrowd Vulnerability Rating Taxonomy to determine the category and value of each vulnerability found, though they reserve the right to adjust a reward based on human review.


Check out these big buts

Because AI programs mine existing content to "write" supportive text such as research papers, performance reviews, cover letters, and exams, they teeter on an ethical tightrope. Thus OpenAI's handlers are smart to seek the help of tech-savvy computer demigods, AKA ethical hackers, to prevent security breaches.

However, because the stakes are so high and the rewards are pretty sweet, there are some hard and fast rules: OpenAI's big buts. Read the buts! Those wanting to participate must read the full program details, covering, but not limited to, these issues:

  • Expectations
  • Rules of Engagement
  • Model Issues
  • Scope and Rewards
  • Out-of-scope issues
  • In-scope issues

This generation of AI programs is progressing rapidly. This isn't your geriatric millennial's clunky chatbot. As these companies continue to develop and enhance their AI tools, the moral and legal gray areas will also grow. My inner ingénue applauds these efforts to make ChatGPT a little more secure. My inner cynic finds it a bit sus but is glad to see that they are being proactive and doing something about the risks.



