
Status of Ethical Standards in Emerging Tech


It’s no secret that emerging tech is packing big punches these days. The question is whether those virtual fists protect or harm. Judging by the number of nervous conversations surrounding the recent ChatGPT launch, most people intuitively grasp that advanced technologies such as AI can swing both ways. Despite the widespread flinching in the presence of clear dangers, a recent Deloitte survey found that “90% of respondents lack ethical guidelines to follow” while designing or using emerging technologies. But is that due to an absence of guidelines and rules, or despite them?

Neither, according to most industry watchers. The problem stems from a prevailing lack of awareness of existing guidelines and unintended consequences.

“The governance of the use of emerging technologies is challenging for a number of reasons, mainly because we are not fully aware of all the consequences, intended and unintended, that the use of emerging technologies brings. In fact, more often than not, we are not fully aware of the uses of emerging technologies, as the uses are developed at the same time as the emerging technologies,” says Esperanza Cuenca-Gómez, Head of Strategy and Outreach at Multiverse Computing, a quantum and AI software and computing provider.

“The question, thus, is very much about how we can devise mechanisms to be able to see beyond the event horizon,” Cuenca-Gómez adds.

The focus in evaluating emerging technologies tends to center on the immediate ramifications and foreseeable possibilities.

“Almost as soon as ChatGPT took the general public by storm, experts have been quick to point out the problematic aspects of the technology. Mainly, the ethical implications of potentially not being able to tell if content was generated by a human or a machine, and who owns content generated by a machine?” says Muddu Sudhakar, CEO of Aisera, an AI services company.

“These are significant issues that will need to be resolved, preferably sooner rather than later,” Sudhakar says. “But we are probably at least 20 years away before the government will enforce the moral obligation with regulation.”

Sudhakar likens this situation to the path of HTTP cookies, which have recorded user data and activities on websites for decades. Yet it was only “in the last five years or so that websites were required to ask users to agree to cookies before continuing activity on the site,” says Sudhakar, even though the moral obligation was clear from the outset. He warns, as does OpenAI, the company behind ChatGPT, that the technology is neither controlled nor regulated and that it commonly generates statements containing factual errors. The potential for “spreading misinformation or sharing responses that reinforce biases” remains an unchecked concern as use of the technology continues to spread like wildfire.
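As a concrete illustration of the consent gate that those rules eventually imposed, here is a minimal Python sketch that sets a tracking cookie only after the user has opted in. The cookie name and the consent flag are hypothetical, not any site’s actual implementation.

```python
# Minimal sketch (hypothetical): set an analytics cookie only after explicit
# user consent, the behavior that cookie-consent rules now require by default.
from http.cookies import SimpleCookie
from typing import Optional


def build_set_cookie_header(user_consented: bool) -> Optional[str]:
    """Return a Set-Cookie header for analytics, or None without consent."""
    if not user_consented:
        return None  # no recorded consent: do not track
    cookie = SimpleCookie()
    cookie["analytics_id"] = "a1b2c3"  # hypothetical tracking identifier
    cookie["analytics_id"]["max-age"] = 60 * 60 * 24 * 365  # one year
    cookie["analytics_id"]["samesite"] = "Lax"
    cookie["analytics_id"]["secure"] = True
    return cookie.output()


print(build_set_cookie_header(user_consented=False))  # None
print(build_set_cookie_header(user_consented=True))   # Set-Cookie: analytics_id=...
```

For decades the default was the reverse: the cookie was set unconditionally, and the user was never asked.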


Ethical Standards and Regulations

While these technologies appear to be running amok among us, there are some guidelines out there, although many of them are very recent efforts. Most “emerging” technologies, on the other hand, “are not new but the scale at which they are being adopted is unprecedented,” says Sourabh Gupta, CEO of Skit.ai, an augmented voice intelligence platform.

This chasm between invention and accountability is the source of much of the angst, dismay, and danger.

“It is much better to design a system for transparency and explainability from the beginning rather than to deal with unexplainable outcomes that are causing harm once the system is already deployed,” says Jeanna Matthews, professor of computer science at Clarkson University and co-chair of the ACM US Technology Committee’s Subcommittee on AI & Algorithms.

To that end, the Association for Computing Machinery’s global Technology Policy Council (TPC) released a new Statement on Principles for Responsible Algorithmic Systems authored jointly by its US and Europe Technology Policy Committees in October 2022. The statement includes nine instrumental principles: Legitimacy and Competency; Minimizing Harm; Security and Privacy; Transparency; Interpretability and Explainability; Maintainability; Contestability and Auditability; Accountability and Responsibility; and Limiting Environmental Impacts, according to Matthews.
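As an illustration of how a team might operationalize such a statement, the sketch below treats the nine principles as a release checklist that blocks a launch until each has a documented sign-off. The sign-off workflow is an assumption for illustration and is not part of the ACM statement itself.

```python
# Hypothetical sketch: the nine ACM TPC principles as a release checklist.
# The sign-off workflow is an assumption, not part of the ACM statement.
ACM_PRINCIPLES = [
    "Legitimacy and Competency",
    "Minimizing Harm",
    "Security and Privacy",
    "Transparency",
    "Interpretability and Explainability",
    "Maintainability",
    "Contestability and Auditability",
    "Accountability and Responsibility",
    "Limiting Environmental Impacts",
]


def unreviewed_principles(signoffs: dict[str, bool]) -> list[str]:
    """Return the principles that still lack a documented reviewer sign-off."""
    return [p for p in ACM_PRINCIPLES if not signoffs.get(p, False)]


signoffs = {"Transparency": True, "Minimizing Harm": True}  # example state
missing = unreviewed_principles(signoffs)
if missing:
    raise SystemExit(f"Release blocked; unreviewed principles: {missing}")
```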

In December 2022, the Council of the European Union adopted its common position on a proposed regulation known as the “Artificial Intelligence Act.”

“This is a first-of-its-kind effort to create a region-wide legal framework for ethical artificial intelligence applications. This proposed legislation will cover all the emerging technologies using machine learning, AI, predictive coding, and AI-powered data analytics,” says Dharmesh Shingala, CEO of Knovos, an ediscovery and IT product provider.

The World Economic Forum stepped up earlier with a set of standards and guidelines in its Quantum Computing Governance Principles, published in January 2022. The principles it contains “establish a good foundation upon which to develop policies, either in the form of laws or internal policies for companies,” according to Cuenca-Gómez.

Additionally, the US has a new blueprint for an “AI Bill of Rights” as of October 2022, and the UK’s Centre for Data Ethics and Innovation published a “Roadmap to an effective AI assurance ecosystem” in December 2021. Other countries are similarly attempting to craft guidance, albeit generally in piecemeal fashion, with separate regulations for each technology, such as AI, quantum computing, and autonomous cars.


Strong ideas for governance can also come from guidelines and standards developed for data use, such as the Open Data Foundation’s data governance framework, since most emerging technologies run on massive amounts of data, and that is where many of the issues lie.

Further, professional organizations and standards bodies are tackling the issues, too.

“IEEE’s vision for prioritizing human well-being with autonomous and intelligent systems (A/IS) led it to innovate in conceiving socio-technical standards and frameworks, embodied in Ethically Aligned Design, which combines universal human values, data agency, and technical dependability with a set of principles to guide AI and A/IS creators and users. This has resulted in the IEEE 7000 series of standards, and the verification of the ethicality of AI solutions with the CertifAIEd AI Ethics conformity assessment program,” says Konstantinos Karachalios, Managing Director of IEEE SA.

Guidance for the Trailblazers

For those companies striking out on their own to forge responsible creation and use of emerging technologies, there are some core issues that can light the way.

“There are three factors contributing to ethical dilemmas in emerging technologies, such as AI: unethical usage, data privacy issues, and biases,” says Gupta.

The impact of unethical usage on society is often poorly understood, and minimal to no guidelines exist. Unethical outcomes commonly result even when biases and other known issues have been minimized.

Data privacy issues arise when AI models are built and trained on user data from multiple sources and then used, knowingly or unknowingly, in ways that disregard individual rights to privacy.
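To make the privacy point concrete, here is a minimal Python sketch of scrubbing obvious personal identifiers from text before it reaches a training pipeline. The two regex patterns are illustrative assumptions only; production PII detection covers far more identifier types.

```python
# Hypothetical sketch: redact obvious personal identifiers from training text.
# These two regexes are illustrative only; real PII detection is far broader.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b")


def redact_pii(text: str) -> str:
    """Replace e-mail addresses and US-style phone numbers with placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)


sample = "Contact Jane at jane.doe@example.com or 555-867-5309 after 5pm."
print(redact_pii(sample))
# -> "Contact Jane at [EMAIL] or [PHONE] after 5pm."
```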

Biases arise in models, during training and design, as they reflect the real world. They are typically very difficult to find and correct, and they often result in skewed and unfair experiences for various groups.
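One simple, widely used bias audit a team could run on model outputs is the “four-fifths” disparate-impact ratio: compare each group’s rate of favorable outcomes against the best-off group’s rate. The sketch below assumes binary outcomes and hypothetical group labels; the 0.8 threshold is the conventional rule of thumb, not a legal test.

```python
# Hypothetical sketch: a four-fifths-rule disparate-impact check on model outputs.
from collections import defaultdict


def passes_four_fifths(outcomes: list[tuple[str, int]],
                       threshold: float = 0.8) -> bool:
    """Return True if every group's favorable-outcome rate (outcome 1 =
    favorable) is at least `threshold` times the highest group's rate."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, outcome in outcomes:
        totals[group] += 1
        favorable[group] += outcome
    rates = {g: favorable[g] / totals[g] for g in totals}
    best = max(rates.values())
    return all(rate >= threshold * best for rate in rates.values())


decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
print(passes_four_fifths(decisions))  # False: B's rate (1/3) < 0.8 * A's (2/3)
```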

But you can also hire professionals to help sort out ethical issues.

“Bringing ethicists and philosophers into the discussions, as well as reflecting upon the works of the great ethicists and moral philosophers of all times, is key to develop internal policies and legal frameworks that govern the use of emerging technologies,” says Cuenca-Gómez, regarding the development of internal company policy.

She cites two examples of what she considers excellent sources: the discipline of futures design, pioneered by Anthony Dunne and Fiona Raby, and the integration of futures design into strategic planning, developed by Amy Webb, professor of strategic foresight at New York University’s Stern School of Business.


Industry leaders and groups are also working to build best practices to serve as guidance until regulators settle on more formal approaches.

“Many organizations leading in responsible AI — which we define as the practice of designing, developing and deploying AI that safely and fairly impacts people and society, while building trust with impacted users — have an agreed-upon set of ethical AI principles that they follow. And many have made these principles public,” says Ray Eitel-Porter, Global Lead for Responsible AI at Accenture.

The most important step, Eitel-Porter says, is “translating these policies into effective governance structures and controls at critical points in the organization.” He cites as an example embedding control points for AI model development into the model development lifecycle or machine learning operations (MLOps) approach. He also advocates that non-technical business decisions be subject to review and approval both before and after implementation. Yet Accenture’s recent survey of 850 senior executives globally found that only 6% have built a Responsible AI foundation and put principles into practice.
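As a rough illustration of what such a control point might look like in practice, the sketch below gates a deployment step on a set of governance checks. The check names and gate structure are assumptions for illustration, not Accenture’s framework or any specific MLOps product.

```python
# Hypothetical sketch of a governance gate in a model-deployment pipeline.
# The check names are assumptions; real gates would call out to review systems.
REQUIRED_CHECKS = ("bias_audit", "privacy_review", "explainability_report")


def deploy_model(model_name: str, check_results: dict[str, bool]) -> None:
    """Promote a model only if every required governance check has passed."""
    failed = [c for c in REQUIRED_CHECKS if not check_results.get(c, False)]
    if failed:
        raise RuntimeError(f"{model_name}: deployment blocked, failed {failed}")
    print(f"{model_name}: all governance checks passed, deploying.")


deploy_model("churn-model-v7",
             {"bias_audit": True, "privacy_review": True,
              "explainability_report": True})
```

The point of wiring the gate into the pipeline itself, rather than into a policy document, is that no one can skip it under deadline pressure.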

Regulation and Standards: The Death of Innovation?

While emerging regulations and standards are perceived by some as the one-two punch that will knock out innovation and throttle emerging technologies, that’s highly unlikely to be the case.

“The rise in AI regulation has raised concerns that it may stifle innovation. But it doesn’t have to. Businesses should view AI regulation — as planned for by the European Union and advocated in the US — as a fence at the edge of a dangerous cliff. It lets companies know just how far they can push their innovation rather than holding back out of uncertainty. The best way for businesses to prepare for AI regulation is to become responsible by design,” says Eitel-Porter.

Other industry leaders and organizations agree.

“Traditionally, organizations protect themselves by the use of reputable international standards and conformity assessment processes that align well with regulatory expectations. The IEEE CertifAIEd program is an industry consensus certification program built to benefit the ecosystem,” says Ravi Subramaniam, IEEE SA Director, Head of Business Development.
