Companies Should Ask These Risk Questions When Procuring AI …


As businesses increasingly consider artificial intelligence tools to augment, supplement, or replace a variety of functions, it’s crucial to update risk-management frameworks to reflect best procurement practices.

Failing to do so can lead companies to adopt what seems like an AI panacea, but is actually a Pandora’s box of regulatory enforcement and litigation risks.

Businesses should look to the World Economic Forum’s guidelines for adopting AI responsibly and consider the following questions when procuring AI tools.

Do I Understand the Data?

AI tools can seem complicated, but they’re only as robust as the data they’re trained on. Businesses should seek assurances from their AI vendors about how the data used to train the model was collected, used, and disclosed. Vendors should demonstrate that they secured all consent required under applicable law when collecting data from consumers.

Businesses also should vet the AI tool’s data usage and training methods when onboarding it. And vendors should detail the governance programs, audits, and other mechanisms that ensure the tool’s usability and reliability and assess its potential for bias, inaccuracy, and unfairness.

When inputting company data, businesses need to understand how the vendor will use that data for training purposes, and should think through the possible use cases before deciding which internal data to upload.

Finally, a company should understand what company data the tool can access, and ask whether that data could be collected and reviewed in litigation, if necessary.

Have I Considered Regulatory Scrutiny?

The Department of Justice, Federal Trade Commission, and other regulators are focused on whether technology companies and their tools create anti-competitive environments or put consumers at a disadvantage.

Given the powerful insights AI tools can provide, regulators are concerned about the harm they may cause consumers. An AI tool could be used in marketing and pricing strategies to accurately predict a specific consumer’s spending capacity, for example. The business could then ensure every widget is sold at the highest price each individual consumer is willing to pay.
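
To make the concern concrete, here is a minimal, purely hypothetical sketch of individualized pricing driven by a predicted willingness to pay. All names (Consumer, predict_spending_capacity) are illustrative, not any vendor’s actual API.

```python
# Hypothetical sketch only: individualized pricing from a predicted
# willingness to pay. A real vendor model would replace the heuristic below.
from dataclasses import dataclass, field

@dataclass
class Consumer:
    consumer_id: str
    past_purchases: list[float] = field(default_factory=list)

def predict_spending_capacity(consumer: Consumer) -> float:
    """Stand-in for an AI model estimating the most this consumer will pay."""
    if not consumer.past_purchases:
        return 10.00  # fall back to the list price
    return round(max(consumer.past_purchases) * 1.10, 2)

def personalized_price(consumer: Consumer, cost_floor: float) -> float:
    """Sell each widget at the consumer's predicted maximum, never below cost.
    This per-consumer extraction is the harm regulators worry about."""
    return max(cost_floor, predict_spending_capacity(consumer))
```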

Regulators have also focused on how bias and inaccuracies in AI output disadvantage certain categories of consumers. AI tools can also expand opportunities for anti-competitive market collaboration.

Take an element of antitrust law that has long been settled: Companies can’t collude to set future prices but can share historical prices. If an AI tool combs through a massive volume of competitors’ historical price data, “the distinctions between past and current or aggregated versus disaggregated data may be eroded,” Principal Deputy Assistant Attorney General Doha Mekki warned this year. “Where competitors adopt the same pricing algorithms, our concern is only heightened.”
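
As a simplified illustration of the distinction Mekki describes (hypothetical numbers, not legal guidance), compare an aggregated historical average with a disaggregated, per-competitor view, and note how a shared pricing rule over the same data pushes competitors toward the same price:

```python
# Illustrative only: the past/current and aggregated/disaggregated distinctions.
from statistics import mean

# Aggregated historical data: traditionally the lower-risk form of sharing.
historical_prices = [9.50, 9.75, 10.00, 10.10]
aggregate_view = mean(historical_prices)  # one blended number, no competitor detail

# Disaggregated current data: the form long associated with collusion risk.
current_prices = {"competitor_a": 9.99, "competitor_b": 10.49, "competitor_c": 10.25}

def shared_pricing_rule(observed: dict[str, float]) -> float:
    """If every competitor adopts the same algorithm over the same data,
    their 'independent' prices converge: the heightened concern Mekki flags."""
    return round(max(observed.values()) * 1.02, 2)
```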

By thinking through how a tool’s data sources could cross the line into competitively sensitive information, businesses can proactively navigate evolving regulatory risks.

Have I Mitigated Security Risks?

Cyberattacks on AI vendors have the “potential to impact the integrity of the AI model’s decisions and predictions,” the World Economic Forum’s guidelines say. Data duplicated into an AI tool is vulnerable to access by a bad actor, and companies should exercise caution when inputting personally identifiable information.

Depending on how the tools are integrated, they could create a new back door into company systems. This is particularly true for AI tools that “crawl” through systems looking for places to create efficiencies, or that can make fetch or “get” requests for data.
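
One common mitigation, sketched below under assumed names (fetch_for_ai_tool and ALLOWED_HOSTS are illustrative, not any product’s API), is to route the tool’s outbound “get” requests through an allowlist so it cannot crawl arbitrary internal systems:

```python
# Hypothetical sketch: confine an AI tool's "get" requests to approved systems.
# fetch_for_ai_tool and ALLOWED_HOSTS are illustrative names, not a real API.
from urllib.parse import urlparse
import urllib.request

ALLOWED_HOSTS = {"reports.internal.example.com"}  # systems the tool may read

def fetch_for_ai_tool(url: str) -> bytes:
    """Refuse any fetch outside the allowlist, closing the crawling back door."""
    host = urlparse(url).hostname
    if host not in ALLOWED_HOSTS:
        raise PermissionError(f"AI tool blocked from fetching {host}")
    with urllib.request.urlopen(url) as response:
        return response.read()
```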

It is essential to understand the vendor’s cybersecurity defenses, including what proactive steps it takes to detect attacks and how its incident response plan would minimize the effects of a breach.

Did I Include Best Practices in the Contract?

Businesses should ensure their contracts with AI vendors include appropriate clauses addressing the use of provided data, data retention and destruction, intellectual property rights, security breaches, and other standard contractual matters.

Special security measures should be in place for certain categories of data, such as limiting or encrypting personal information that implicates data privacy regulations.
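
As one hedged example of such a measure, here is a minimal sketch (the field names and the minimize_record helper are hypothetical) that drops or hashes regulated personal information before a record is sent to a vendor:

```python
# Hypothetical sketch: minimize regulated personal information before upload.
# PII_FIELDS and minimize_record are illustrative names, not a real library API.
import hashlib

PII_FIELDS = {"name", "email", "ssn"}  # categories that implicate privacy rules

def minimize_record(record: dict) -> dict:
    """Drop direct identifiers; keep a one-way hash where a stable key is needed."""
    safe = {key: value for key, value in record.items() if key not in PII_FIELDS}
    if "email" in record:
        safe["customer_key"] = hashlib.sha256(record["email"].encode()).hexdigest()
    return safe

# Example: only non-identifying fields and a hashed key leave the company.
print(minimize_record({"name": "Ada", "email": "ada@example.com", "order_total": 42.0}))
```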

Additionally, World Economic Forum guidelines suggest including a compliance statement in master service agreements, such as the Responsible Artificial Intelligence Institute certification, which aligns with current AI regulations and principles.

The guidelines also suggest businesses develop their own key performance indicators. Businesses should further consider whether policies providing guidance to employees on proper use are warranted.

AI is rapidly changing how the world does business. To maximize the promise of AI while minimizing its risks, companies should diligently and proactively assess AI tools and protect themselves through each contract.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.

Author Information

Sarah Hutchins is a partner at Parker Poe and leads the firm’s cybersecurity and data privacy team.

Debbie Edney is counsel at Parker Poe and has experience representing corporate, financial, and individual clients in all aspects of complex commercial litigation.

Robert Botkin is an associate at Parker Poe and helps clients navigate data privacy and security issues across industries.
