
Coming AI regulation may not protect us from dangerous AI




Most AI systems today are neural networks: algorithms that loosely mimic a biological brain to process vast amounts of data. They are known for being fast, but also for being inscrutable. Neural networks require enormous amounts of data to learn how to make decisions, yet the reasons for those decisions are concealed within countless layers of artificial neurons, each separately tuned to its own parameters.

In other words, neural networks are “black boxes.” And the developers of a neural network not only don’t control what the AI does, they don’t even know why it does what it does. 
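
To see the problem in miniature, here is a sketch of my own (not drawn from any production system): a tiny network trained on the XOR function with plain numpy. The network learns the task, but its learned parameters are just numbers tuned by gradient descent; inspecting them tells you nothing about why a given input produced a given output.

```python
# A toy illustration of the black-box problem: a two-layer network
# trained on XOR. All names and numbers here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))  # input -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))  # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):  # plain full-batch gradient descent
    h = sigmoid(X @ W1 + b1)    # hidden activations
    out = sigmoid(h @ W2 + b2)  # predictions
    grad_out = (out - y) * out * (1 - out)    # backprop: output layer
    grad_h = (grad_out @ W2.T) * h * (1 - h)  # backprop: hidden layer
    W2 -= 0.5 * h.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ grad_h
    b1 -= 0.5 * grad_h.sum(axis=0, keepdims=True)

print(out.ravel().round(2))  # should be close to [0, 1, 1, 0]
print(W1)  # 16 tuned numbers; none of them reads as a "reason"
```

Even here, with 33 parameters and four training examples, the weights carry no human-readable explanation. Scale that up to billions of parameters and the opacity regulators are worried about follows directly.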

This is a horrifying reality. But it gets worse.

Despite the risk inherent in the technology, neural networks are beginning to run key infrastructure for critical business and governmental functions. As AI systems proliferate, the list of examples of dangerous neural networks grows longer every day.


These failures range from deadly to comical to grossly offensive. And as long as neural networks are in use, we're at risk of harm in numerous ways. Companies and consumers are rightly concerned that as long as AI remains opaque, it remains dangerous.

A regulatory response is coming

In response to such concerns, the EU has proposed an AI Act — set to become law by January — and the U.S. has drafted an AI Bill of Rights Blueprint. Both tackle the problem of opacity head-on. 

The EU AI Act states that “high-risk” AI systems must be built with transparency, allowing an organization to pinpoint and analyze potentially biased data and remove it from all future analyses. It removes the black box entirely. The EU AI Act defines high-risk systems to include critical infrastructure, human resources, essential services, law enforcement, border control, jurisprudence and surveillance. Indeed, virtually every major AI application being developed for government and enterprise use will qualify as a high-risk AI system and thus will be subject to the EU AI Act.


Similarly, the U.S. AI Bill of Rights asserts that users should be able to understand the automated systems that affect their lives. It has the same goal as the EU AI Act: protecting the public from the real risk that opaque AI will become dangerous AI. The Blueprint is currently a non-binding and therefore toothless white paper. However, its provisional nature might be a virtue, as it will give AI scientists and advocates time to work with lawmakers to shape the law appropriately.

In any case, it seems likely that both the EU and the U.S. will require organizations to adopt AI systems that provide interpretable output to their users. In short, the AI of the future may need to be transparent, not opaque.

But does it go far enough?

Establishing new regulatory regimes is always challenging. History offers us no shortage of examples of ill-advised legislation that accidentally crushes promising new industries. But it also offers counter-examples where well-crafted legislation has benefited both private enterprise and public welfare.

For instance, when the dotcom revolution began, copyright law was well behind the technology it was meant to govern. As a result, the early years of the internet era were marred by intense litigation targeting companies and consumers. Eventually, the comprehensive Digital Millennium Copyright Act (DMCA) was passed. Once companies and consumers adapted to the new laws, internet businesses began to thrive and innovations like social media, which would have been impossible under the old laws, were able to flourish. 

The forward-looking leaders of the AI industry have long understood that a similar statutory framework will be necessary for AI technology to reach its full potential. A well-constructed regulatory scheme will offer consumers the security of legal protection for their data, privacy and safety, while giving companies clear and objective regulations under which they can confidently invest resources in innovative systems.

Unfortunately, neither the AI Act nor the AI Bill of Rights meets these objectives. Neither framework demands enough transparency from AI systems. Neither framework provides enough protection for the public or enough regulation for business.


A series of analyses provided to the EU has pointed out the flaws in the AI Act. (Similar criticisms could be leveled at the AI Bill of Rights, with the added proviso that the American framework isn't even intended to be a binding policy.) These flaws include:

  • Offering no criteria by which to define unacceptable risk for AI systems and no method to add new high-risk applications to the Act if such applications are discovered to pose a substantial danger of harm. This is particularly problematic because AI systems are becoming broader in their utility.
  • Only requiring that companies take into account harm to individuals, excluding considerations of indirect and aggregate harms to society. An AI system that has a very small effect on, e.g., each person’s voting patterns might in the aggregate have a huge social impact (see the arithmetic sketch after this list).
  • Permitting virtually no public oversight over the assessment of whether AI meets the Act’s requirements. Under the AI Act, companies self-assess their own AI systems for compliance without the intervention of any public authority. This is the equivalent of asking pharmaceutical companies to decide for themselves whether drugs are safe — a practice that both the U.S. and EU have found to be detrimental to the public. 
  • Failing to clearly define the party responsible for assessing general-purpose AI. If a general-purpose AI can be used for high-risk purposes, does the Act apply to it? If so, is the creator of the general-purpose AI responsible for compliance, or is it the company that puts the AI to high-risk use? This vagueness creates a loophole that incentivizes blame-shifting: each company can claim that self-assessment was its partner’s responsibility, not its own.

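That aggregate-harm concern is worth a moment of arithmetic. The sketch below uses invented numbers, a hypothetical 150-million-person electorate and an assumed 0.1% per-voter effect, purely to show the scale involved:

```python
# Back-of-the-envelope sketch with invented numbers: an effect far too
# small for any individual to notice can still be decisive in aggregate.
electorate = 150_000_000   # hypothetical electorate size
per_voter_shift = 0.001    # assumed 0.1% chance the system flips one vote

expected_flips = electorate * per_voter_shift
print(f"{expected_flips:,.0f} expected flipped votes")  # 150,000
```

A six-figure swing in expected votes is larger than the margin that has decided more than one national election, yet a framework that counts only individual harm would register the per-person effect as negligible.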
For AI to safely proliferate in America and Europe, these flaws need to be addressed. 

What to do about dangerous AI until then

Until appropriate regulations are put in place, black-box neural networks will continue to use personal and professional data in ways that are completely opaque to us. What can someone do to protect themselves from opaque AI? At a minimum: 

  • Ask questions. If you are somehow discriminated against or rejected by an algorithm, ask the company or vendor, “Why?” If they cannot answer that question, reconsider whether you should be doing business with them. You can’t trust an AI system to do what’s right if you don’t even know why it does what it does.
  • Be thoughtful about the data you share. Does every app on your smartphone need to know your location? Does every platform you use need to go through your primary email address? A level of minimalism in data sharing can go a long way toward protecting your privacy.  
  • Where possible, only do business with companies that follow best practices for data protection and use transparent AI systems.
  • Most important, support regulation that will promote interpretability and transparency. Everyone deserves to understand why an AI impacts their lives the way it does.

The risks of AI are real, but so are the benefits. In tackling the risk of opaque AI leading to dangerous outcomes, the AI Bill of Rights and AI Act are charting the right course for the future. But the level of regulation is not yet robust enough.

Michael Capps is CEO of Diveplane.
