Opinions

India can take the lead in advancing global AI governance



On Monday, Joe Biden issued an executive order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The order is premised on eight ‘guiding principles and priorities’ to advance and govern the development and use of AI, and intends to establish new standards for:

  • AI safety and security.
  • Privacy protection.
  • Advancing equity and civil rights.
  • Protecting consumers and workers.
  • Promoting innovation and competition.
  • Advancing US global leadership in the technology.

On the same day, the G7 Hiroshima AI Process released the Hiroshima Process International Guiding Principles for Organisations Developing Advanced AI Systems and the Hiroshima Process International Code of Conduct for Organisations Developing Advanced AI Systems, based on the progress made in relevant ministerial deliberations over the last few months. Both these developments come only three days after UN Secretary-General António Guterres launched the AI Advisory Body on risks, opportunities and international governance of AI, and two days before the first AI Safety Summit, convened by the British government, takes place at Bletchley Park. Amid all these developments, the EU has already passed the AI Act and is now discussing it with member nations for its final adoption.

Clearly, AI has taken centre-stage in the global tech ecosystem and shaped recent geopolitical diplomacy, surpassing the efforts around wider cyberspace issues. In March this year, an open letter with more than 27,000 signatures called on all AI labs to ‘immediately pause’ the training of their most powerful systems for at least six months. Existential concerns about AI were also raised by industry experts like computer scientist and cognitive psychologist Geoffrey Hinton and OpenAI CEO Sam Altman in May, and there has been a flurry of wake-up calls to analyse the possible threats of AI and balance them against its enormous possibilities when used responsibly.


Many of these concerns arise from the myriad possibilities, ranging from generating deepfakes and misinformation to force-multiplying cyberattacks on critical infrastructure. These possibilities have raised alarm levels and the demand for stricter regulation.

Everyone was looking towards the US. Most big AI development is conducted within US corporations, which have invested significantly in AI projects across domains and are in ruthless competition with each other. The Biden administration had already started engaging with stakeholders on AI, based on its ‘Blueprint for an AI Bill of Rights’ released in October 2022.

On the other hand, the US had also moved on the strategic underpinnings of AI, bringing in restrictions on technology-related exports to China through two announcements: one in October last year and the other this year.

In Biden’s executive order on Monday, the timelines provided across the eight guiding principles and priorities indicate a clear exercise the US proposes to conduct to ensure that AI development continues but remains a controlled affair across the public and private sectors.

Making the companies responsible for foundation models report the training of those models and the results under the tenets of the Defence Production Act is a prudent step. Such models can affect national and economic security, as well as public health and safety. The move rightly takes matters for the industry beyond the self-regulatory approach.

However, many rogue companies or agencies inside and outside the US could skip this requirement and still have models running. Likewise, despite its detailed pitch, the steps enunciated to reduce the impact of AI on privacy, cybersecurity and fake content may fall short of their intended results, because AI-generated content could still be produced and circulated from other geographies.


At the same time, the protection of displaced workers and the absorption of AI systems could be difficult to implement. However, effective government use of AI could be jump-started, provided it is done in a way that makes the government more responsive to citizens through the proposed graded system of dates for implementing various functions.

To be effective, the broad measures in Biden’s executive order also need to come to the table for global discussion. The US should use Britain’s two-day AI summit at Bletchley Park, starting tomorrow, to initiate that larger discussion. Armed with the Hiroshima Process announcement, it could back Britain’s pitch for leadership in this area.

Nor should one lose sight of the fact that India currently holds the chair of the 29-member Global Partnership on Artificial Intelligence (GPAI) and will host its annual summit next month in New Delhi. GPAI and India could be suitably pitched to take the global deliberations on responsible AI forward as nations absorb this week’s White House executive order and the Downing Street-organised summit brings stakeholders together.

The writer is visiting scholar, Ostrom Workshop, Indiana University Bloomington, US, and former India head, General Dynamics


