
China warns of artificial intelligence risks, calls for beefed-up national security measures


China’s ruling Communist Party is calling for beefed-up national security measures, highlighting risks from artificial intelligence

A meeting headed by party leader and President Xi Jinping on Tuesday urged “dedicated efforts to safeguard political security and improve the security governance of internet data and artificial intelligence,” the official Xinhua News Agency said.

China needs a “new pattern of development with a new security architecture,” Xinhua reported Xi as saying.

China already dedicates vast resources to suppressing any perceived political threats to the party’s dominance, with spending on the police and security personnel exceeding that devoted to the military.

While the government relentlessly suppresses in-person protests and censors online criticism, citizens have continued to express dissatisfaction with its policies, most recently the draconian lockdown measures enacted to combat the spread of COVID-19.

China has been cracking down on its tech sector in an effort to reassert party control but, like other countries, is scrambling to find ways to regulate the fast-developing technology.

Worries about artificial intelligence systems outsmarting humans and slipping out of control have intensified with the rise of a new generation of highly capable AI chatbots such as ChatGPT.


Scientists and tech industry leaders, including high-level executives at Microsoft and Google, issued a new warning Tuesday about the perils that artificial intelligence poses to humankind.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement said.

The missive said AI poses “profound risks to society and humanity,” and some of those involved have proposed a United Nations treaty to regulate the technology.


