Artificial intelligence, and ChatGPT in particular, has exploded worldwide, and so has the potential for misuse or abuse of AI, a risk that must be taken seriously. At the same time, AI presents a range of potential benefits for society and individuals alike. Brad Fisher, CEO of Lumenova AI, explains how and why the application of responsible AI is both a technology debate and a business conundrum.
AI is a hot topic, thanks to ChatGPT. People and organizations have begun to consider its myriad use cases enthusiastically, but there is also an undercurrent of concern about its possible risks and limitations. With this rush toward implementation, Responsible AI (RAI) has come to the forefront, and companies are questioning whether it is a technology or a business matter. I think it’s both.
According to an MIT Sloan white paper published in September 2022, the world is at a point where AI failures are beginning to multiply and the first AI-related regulations are coming online. The report notes that while both developments lend urgency to efforts to implement responsible AI programs, the companies leading the way on RAI are not driven primarily by regulations or other operational concerns. Instead, the research suggests that leaders take a strategic view of RAI, emphasizing their organizations’ external stakeholders, broader long-term goals and values, leadership priorities, and social responsibility.
This aligns with the view that Responsible AI is both a technology issue and a business issue. Clearly, the potential issues manifest within the technology itself, so that is front and center. But the reality is that the standards for what is and is not acceptable for AI are unclear.
For example, we all agree that AI needs to be “fair,” but whose definition of “fair” should we use? That is a company-by-company decision, and once you get into the details, it is not an easy one to make.
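To see why the details are hard, consider a minimal sketch (in Python, with invented numbers for a hypothetical hiring model) of two common statistical definitions of fairness: demographic parity and equal opportunity. The same set of decisions can satisfy one definition and violate the other.

```python
# Two common statistical definitions of "fair", applied to the same
# hypothetical hiring model's decisions. All numbers are invented
# purely for illustration.

def selection_rate(decisions):
    """Fraction of candidates the model recommends to hire."""
    return sum(decisions) / len(decisions)

def true_positive_rate(decisions, qualified):
    """Among genuinely qualified candidates, fraction recommended."""
    hits = sum(d for d, q in zip(decisions, qualified) if q)
    return hits / sum(qualified)

# Hypothetical outcomes for two demographic groups (1 = hire, 0 = reject).
group_a = {"decisions": [1, 1, 0, 0, 1, 0, 1, 0],
           "qualified": [1, 1, 0, 0, 1, 0, 0, 0]}
group_b = {"decisions": [1, 0, 0, 1, 0, 0, 1, 1],
           "qualified": [1, 1, 1, 1, 0, 0, 1, 1]}

# Demographic parity: are selection rates equal across groups?
print(selection_rate(group_a["decisions"]),   # 0.5
      selection_rate(group_b["decisions"]))   # 0.5 -> parity holds

# Equal opportunity: are true positive rates equal across groups?
print(true_positive_rate(group_a["decisions"], group_a["qualified"]),   # 1.0
      true_positive_rate(group_b["decisions"], group_b["qualified"]))   # ~0.67 -> fails
```

In this toy example, both groups are selected at the same 50% rate, so demographic parity holds, yet qualified candidates in group B are recommended far less often than those in group A, so equal opportunity fails. A company has to decide which definition it is committing to, and be prepared to defend that choice.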
The “both a technology and a business issue” framing is important because most organizations assess only the technical aspects. Assessing, and where possible automating, Responsible AI from both a business and a technical perspective helps bridge the gap between the two. This is especially true for heavily regulated industries. The NIST AI Risk Management Framework, released in January 2023, provides useful guidelines to help organizations assess and address their needs for Responsible AI.
Let’s take a deeper dive.
What Is Responsible AI?
AI can discriminate and create bias. AI models can be trained on data that contains inherent biases and can perpetuate existing biases in society. For instance, if a computer vision system is trained mainly on images of white people, it may be less accurate at recognizing people of other races. Similarly, AI algorithms used in hiring can be biased because they are trained on datasets of resumes from past hires, which may themselves be skewed by gender or ethnicity.
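One practical first step against this kind of bias is auditing how groups are represented in the training data before a model is ever trained. Below is a minimal sketch of such a check; the dataset, the group labels, and the 20% threshold are all invented for illustration.

```python
from collections import Counter

# Hypothetical labeled training set for a face-recognition model; in a
# real pipeline these labels would come from the dataset's metadata.
training_samples = [
    {"image": "img_001.jpg", "group": "white"},
    {"image": "img_002.jpg", "group": "white"},
    {"image": "img_003.jpg", "group": "black"},
    {"image": "img_004.jpg", "group": "white"},
    {"image": "img_005.jpg", "group": "asian"},
    # ... thousands more in practice
]

counts = Counter(sample["group"] for sample in training_samples)
total = sum(counts.values())

# Flag any group making up less than 20% of the data (an arbitrary
# threshold for illustration; a real audit would set this deliberately).
for group, n in counts.items():
    share = n / total
    status = "UNDER-REPRESENTED" if share < 0.20 else "ok"
    print(f"{group}: {share:.0%} {status}")
```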
Responsible AI is an approach to artificial intelligence (AI) designed to ensure that AI systems are used ethically and responsibly. It is based on the idea that AI should benefit people and society and must be developed with attention to ethical, legal, and regulatory concerns. Responsible AI involves transparency, accountability, fairness, and safety measures. Such measures could include AI auditing and monitoring, ethical codes of conduct, data privacy and security safeguards, and steps to ensure that AI is used in a manner consistent with human rights.
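To make “auditing and monitoring” concrete: one common check compares the data a deployed model sees in production against the data it was trained on. The sketch below uses the population stability index (PSI), a widely used drift metric; the synthetic credit-score data and the 0.25 review threshold are my own illustrative assumptions.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a training-time feature distribution and live traffic."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid log(0) in empty bins.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
training_scores = rng.normal(600, 50, 10_000)  # feature distribution at training time
live_scores = rng.normal(630, 60, 2_000)       # live traffic has drifted upward

psi = population_stability_index(training_scores, live_scores)
# Common rule of thumb (an assumption, not a standard): a PSI above
# roughly 0.25 signals enough input drift to warrant a model review.
print(f"PSI = {psi:.3f}")
```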
Where Is Responsible AI Most Needed?
The early adopters of AI are banking/finance, insurance, healthcare, and other heavily regulated industries, including telecom, as well as heavily consumer-facing sectors such as retail and hospitality/travel. Let’s break it down by industry:
- Banking/Finance: AI can process large amounts of customer data to better understand customer needs and preferences, which can then be used to improve the customer experience and provide more tailored services. AI can also identify fraud and suspicious activity, automate processes, and provide more accurate and timely financial advice (a minimal fraud-detection sketch follows this list).
- Insurance: AI can be used to better understand customer data and behavior to provide more personalized insurance coverage and pricing. AI can also be used to automate the claims process and streamline customer service operations.
- Healthcare: AI can identify patterns in medical data to help diagnose diseases, predict health outcomes, and personalize treatment plans. AI can also automate administrative and operational tasks, such as patient scheduling and insurance processing.
- Telecom: AI can be used to provide better customer service by analyzing customer data and understanding customer needs and preferences. AI can also be used to automate customer service processes, such as troubleshooting and billing.
- Retail: AI can be used to personalize customer experiences by analyzing customer data and understanding customer needs and preferences. AI can also be used to automate inventory management and customer service operations.
- Hospitality/Travel: AI can automate processes such as online booking and routine customer service. AI can also analyze customer data to provide personalized recommendations.
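As promised above, here is a minimal sketch of the fraud-detection idea using scikit-learn’s IsolationForest on synthetic transactions. The features, the assumed 1% anomaly rate, and the data are all invented for illustration, not a production design.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic transaction features: [amount_usd, hour_of_day].
normal_txns = np.column_stack([
    rng.lognormal(mean=3.5, sigma=0.5, size=1_000),  # typical amounts
    rng.integers(8, 22, size=1_000),                 # daytime activity
])
odd_txns = np.array([[9_500.0, 3], [7_200.0, 4]])    # large, middle-of-the-night

# contamination is our prior guess at the anomaly rate (an assumption).
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_txns)

# predict() returns -1 for suspected anomalies, 1 for inliers.
print(model.predict(odd_txns))         # likely [-1, -1]: far outside training data
print(model.predict(normal_txns[:3]))  # mostly [1, 1, 1]
```

IsolationForest is unsupervised, which suits fraud detection, where labeled examples of fraud are scarce; flagged transactions would typically be routed to a human analyst rather than blocked automatically.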
How Should Responsible AI Be Regulated?
Government regulation of AI is the set of rules that governments enforce to ensure that the development and use of artificial intelligence (AI) is safe, ethical, and lawful. Regulations vary between countries, but they typically involve setting standards for ethics, safety, and security, and assigning legal liability for any harm caused by AI systems. Regulatory bodies may also require developers to be trained in safety and security protocols and to ensure that their products are designed with best practices in mind. Additionally, governments may provide incentives for companies to create AI systems that benefit society, such as those that assist in the fight against climate change.
The National Institute of Standards and Technology (NIST) is an agency of the U.S. Department of Commerce that develops and promotes standards, guidelines, and related technology to advance innovation and economic growth. As part of its mission, NIST developed the AI Risk Management Framework (AI RMF) to give organizations a set of principles and activities for deploying and managing AI applications. The framework is organized around four core functions.
The four functions are:
- Govern: Cultivating a culture of risk management, with the policies, processes, accountability structures, and decision-making practices needed to use AI ethically, responsibly, and effectively.
- Map: Establishing the context in which an AI system will operate and identifying the risks tied to its intended use and its impact on people.
- Measure: Assessing, analyzing, and tracking identified AI risks with quantitative and qualitative methods, including trustworthiness characteristics such as transparency, security, reliability, and resilience.
- Manage: Prioritizing and acting on measured risks, allocating resources to treat them, and monitoring AI systems and their economic, environmental, and social impacts over time.

The AI RMF is voluntary and is designed to help organizations systematically evaluate and improve their AI initiatives and build and maintain trust in their AI applications. It is also intended to help organizations that are new to AI understand the basic principles, processes, and technologies involved. NIST developed it at the direction of Congress through an open, consensus-driven process with input from the public and private sectors.
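One lightweight way a team might operationalize these functions, a sketch of my own rather than an official NIST artifact, is to encode them as a machine-checkable self-assessment so that gaps surface in review rather than in an incident. The checks below are invented examples.

```python
from dataclasses import dataclass, field

@dataclass
class FunctionAssessment:
    """Self-assessment for one AI RMF core function (illustrative only)."""
    name: str
    checks: dict[str, bool] = field(default_factory=dict)

    def gaps(self) -> list[str]:
        return [check for check, done in self.checks.items() if not done]

# Example checks per function; a real program would derive these from
# the framework's categories and the organization's own risk appetite.
assessment = [
    FunctionAssessment("Govern", {"AI policy approved": True, "roles assigned": True}),
    FunctionAssessment("Map", {"intended use documented": True, "impacted groups identified": False}),
    FunctionAssessment("Measure", {"fairness metrics tracked": False, "drift monitoring live": True}),
    FunctionAssessment("Manage", {"risk register maintained": True, "incident response plan": False}),
]

for fa in assessment:
    status = "OK" if not fa.gaps() else f"gaps: {', '.join(fa.gaps())}"
    print(f"{fa.name}: {status}")
```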
By incorporating the NIST framework into their Responsible AI initiatives, companies can help ensure that their AI systems meet the necessary standards and regulations while reducing their risk of data breaches and other security issues. This is an important step on the journey to Responsible AI, as it equips organizations to manage their AI systems in a responsible and secure manner. The framework can also serve as a guide for identifying and implementing best practices for AI technologies such as machine learning and deep learning. In summary, Responsible AI is both a technology issue and a business issue.
The NIST framework can help organizations assess and address their needs for Responsible AI while providing a set of standards, guidelines, and best practices to help ensure that their AI systems are secure and compliant. Early adopters of the framework include heavily regulated industries and those that are heavily consumer-facing.
A Mundane New World?
ChatGPT is putting a spotlight on AI, but less than 5% of actual use cases will be ‘brave new world’ scenarios. AI is still a relatively new technology, and most use cases currently focus on more practical applications, such as predictive analytics, natural language processing, and machine learning. While ‘brave new world’ scenarios are certainly possible, many of the current AI-powered applications are designed to improve existing systems and processes rather than disrupt them.
Responsible AI is both a technology issue and a business issue. As technology advances, businesses must consider the ethical implications of using AI and other automated systems in their operations. They must consider how these technologies will impact their customers and employees and how they might use them responsibly to protect data and privacy. Additionally, businesses must ensure they are compliant with applicable laws and regulations when it comes to using AI and other automated systems and that they are aware of the potential risks associated with using such technologies.
The future of Responsible AI is bright. As technology continues to evolve, companies are beginning to recognize the importance of ethical AI and to incorporate it into their operations. Responsible AI is becoming increasingly important for businesses that want to ensure their decisions are ethical and fair. Companies can use AI to build transparent and explainable products while weighing the human and ethical implications of their decisions. Additionally, responsible AI can be used to automate processes, helping companies make decisions faster, with less risk and greater accuracy. As technology advances, companies will increasingly rely on responsible AI to make decisions and create products that are safe, secure, and beneficial to their customers and the world.
The potential for misuse or abuse of artificial intelligence (AI) presents a risk that must be taken seriously. However, AI also presents a range of potential benefits for society and individuals alike, and it is important to remember that AI is only as dangerous as the intentions of the people who use it.