
Why self-regulation of AI is a smart business move




ChatGPT and other text- and image-generating chatbots have captured the imagination of millions of people — but not without controversy. Despite the uncertainties, businesses are already in the game, whether they’re toying with the latest generative AI chatbots or deploying AI-driven processes throughout their enterprises.

That’s why it’s essential that businesses address growing concerns about AI’s unpredictability, as well as its more predictable and potentially harmful impacts on end users. Failure to do so will undermine AI’s progress and promise. And though governments are moving to create rules for AI’s ethical use, the business world can’t afford to wait.

Companies need to set up their own guardrails. The technology is simply moving too fast — much faster than AI regulation, not surprisingly — and the business risks are too great. It may be tempting to learn as you go, but the potential for making a costly mistake argues against an ad hoc approach. 

Self-regulate to gain trust

There are many reasons for businesses to self-regulate their AI efforts — corporate values and organizational readiness, among them. But risk management may be at the top of the list. Any missteps could undermine customer privacy, customer confidence and corporate reputation. 


Fortunately, there’s much that businesses can do to establish trust in AI applications and processes. Choosing the right underlying technologies — those that facilitate thoughtful development and use of AI — is part of the answer. Equally important is ensuring that the teams building these solutions are trained in how to anticipate and mitigate risks. 

Success will also hinge on well-conceived AI governance. Business and tech leaders must have visibility into, and oversight of, the datasets and language models being used, risk assessments, approvals, audit trails and more. Data teams — from engineers prepping the data to data scientists building the models — must be vigilant in watching for AI bias every step of the way and not allow it to be perpetuated in processes and outcomes.
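
To make that vigilance concrete, many teams wire simple fairness checks into the pipeline as automated gates. Below is a minimal sketch in Python; the metric (a demographic parity gap), the threshold and the toy data are illustrative assumptions, not anything prescribed here.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-outcome rates between any two groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy decisions (1 = favorable outcome) and a sensitive attribute.
preds  = [1, 0, 1, 1, 0, 1, 1, 0, 1, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

gap, rates = demographic_parity_gap(preds, groups)
print(f"per-group rates: {rates}, gap: {gap:.2f}")
if gap > 0.25:  # illustrative threshold; a real one comes from governance policy
    raise RuntimeError("Fairness gate failed: review before deployment")
```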


Risk management must begin now

Organizations may eventually have little choice but to adopt some of these measures. Legislation now being drafted could mandate checks and balances to ensure that AI treats consumers fairly. Comprehensive AI regulation has yet to be codified, but it’s only a matter of time before that happens.

To date in the U.S., the White House has released a “Blueprint for an AI Bill of Rights,” which lays out principles to guide the development and use of AI — including protections against algorithmic discrimination and the ability to opt out of automated processes. Meanwhile, federal agencies are clarifying requirements found in existing regulations, such as those in the FTC Act and the Equal Credit Opportunity Act, as a first line of AI defense for the public.

But smart companies won’t wait for whatever overarching government rules might materialize. Risk management must begin now.  

AI regulation: Lowering risk while increasing trust

Consider this hypothetical: A distressed person sends an inquiry to a healthcare clinic’s chatbot-powered support center. “I’m feeling sad,” the user says. “What should I do?”

It’s a potentially sensitive situation and one that illustrates how quickly trouble could surface without AI due diligence. What happens, say, if the person is in the midst of a personal crisis? Does the healthcare provider face potential liability if the chatbot fails to provide the nuanced response that’s called for — or worse, recommends a course of action that may be harmful? Similar hard-to-script — and risky — scenarios could pop up in any industry.
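
A common mitigation is a guardrail layer that screens each message before the model responds and escalates sensitive ones to a human. Here is a minimal sketch, with a keyword screen standing in for the trained classifier and clinically reviewed escalation policy a real system would need:

```python
# Illustrative guardrail: hand sensitive inquiries to a human reviewer
# before any generated reply goes out. The keyword list is a placeholder.
SENSITIVE_TERMS = ("feeling sad", "hopeless", "hurt myself", "crisis")

def route_message(message: str) -> str:
    text = message.lower()
    if any(term in text for term in SENSITIVE_TERMS):
        return "escalate_to_human"  # no automated response for these
    return "chatbot_reply"

print(route_message("I'm feeling sad. What should I do?"))     # escalate_to_human
print(route_message("What are your clinic's opening hours?"))  # chatbot_reply
```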

This explains why awareness and risk management are a focus of some regulatory and non-regulatory frameworks. The European Union’s proposed AI Act addresses high-risk and unacceptable risk use cases. In the U.S., the National Institute of Standards and Technology’s Risk Management Framework is intended to minimize risk to individuals and organizations, while also increasing “the trustworthiness of AI systems.”


How to determine AI trustworthiness?

How does anyone determine if AI is trustworthy? Various methodologies are arising in different contexts, whether the European Commission’s Guidelines for Trustworthy AI, the EU’s Draft AI Act, the U.K.’s AI Assurance Roadmap and recent White Paper on AI Regulation, or Singapore’s AI Verify. 

AI Verify seeks to “build trust through transparency,” according to the Organization for Economic Cooperation and Development. It does this by providing a framework to ensure that AI systems meet accepted principles of AI ethics. This is a variation on a widely shared theme: Govern your AI from development through deployment. 

Yet, as well-meaning as the various government efforts may be, it’s still crucial that businesses create their own risk-management rules rather than wait for legislation. Enterprise AI strategies have the greatest chance of success when some common principles — safe, fair, reliable and transparent — are baked into the implementation. These principles must be actionable, which requires tools to systematically embed them within AI pipelines.
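
What might “baked in” look like in practice? One hypothetical pattern is a deployment gate that blocks a model from promotion until every principle has a passing check. In this sketch, the check names, metrics and thresholds are all illustrative assumptions:

```python
# Hypothetical example of making principles actionable: a deployment gate
# that refuses promotion until each principle has a passing check.
GATES = {
    "safe":        lambda m: m["harmful_output_rate"] < 0.01,
    "fair":        lambda m: m["parity_gap"] < 0.10,
    "reliable":    lambda m: m["eval_accuracy"] > 0.90,
    "transparent": lambda m: bool(m.get("model_card")),
}

def approve_for_deployment(model: dict) -> bool:
    failures = [name for name, check in GATES.items() if not check(model)]
    if failures:
        print(f"Blocked; failed principles: {failures}")
        return False
    return True

candidate = {"harmful_output_rate": 0.002, "parity_gap": 0.04,
             "eval_accuracy": 0.93, "model_card": "v1.2"}
print(approve_for_deployment(candidate))  # True
```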

People, processes and platforms

The upside is that AI-enabled business innovation can be a true competitive differentiator, as we already see in areas such as drug discovery, insurance claims forecasting and predictive maintenance. But the advances don’t come without risk, which is why comprehensive governance must go hand-in-hand with AI development and deployment.

A growing number of organizations are mapping out their first steps, taking into account people, processes and platforms. They’re forming AI action teams with representation across departments, assessing data architecture and discussing how data science must adapt.

How are project leaders managing all this? Some start with little more than emails and video calls to coordinate stakeholders, and spreadsheets to document and log progress. That works at a small scale. But enterprise-wide AI initiatives must go further and capture which decisions are made and why, as well as details on models’ performance throughout a project’s lifecycle. 
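
As a sketch of what going further than spreadsheets can mean, the snippet below appends each lifecycle decision, with its rationale and model metrics, to an append-only JSON Lines log. The field names and values are illustrative, not a standard schema:

```python
# Minimal sketch of an auditable decision log for a model's lifecycle.
import json
from datetime import datetime, timezone

def log_decision(path, *, model, decision, rationale, metrics, approver):
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "decision": decision,
        "rationale": rationale,
        "metrics": metrics,
        "approver": approver,
    }
    with open(path, "a") as f:  # JSON Lines: one auditable record per line
        f.write(json.dumps(entry) + "\n")

log_decision("audit_log.jsonl",
             model="claims-forecast-v3",
             decision="promote_to_staging",
             rationale="Passed fairness and reliability gates",
             metrics={"auc": 0.87, "parity_gap": 0.05},
             approver="governance-board")
```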


Robust governance the surest path

In short, the value of self-governance arises from documentation of processes, on the one hand, and key information about models as they’re developed and at the point of deployment, on the other. Altogether, this provides a complete picture for current and future compliance.

The audit trails made possible by this kind of governance infrastructure are essential for “AI explainability.” That comprises not only the technical capabilities required for explainability but also a social dimension: an organization’s ability to provide a rationale for its AI model and implementation.

What this all boils down to is that robust governance is the surest path to successful AI initiatives — those that build customer confidence, reduce risk and drive business innovation. My advice: Don’t wait for the ink to dry on government rules and regulations. The technology is moving faster than the policy.

Jacob Beswick is director of AI governance solutions at Dataiku
