
What threats could AI advancements pose to human life? | Opinion – Deseret News


The International Atomic Energy Agency serves as the world’s forum for encouraging the peaceful use of nuclear energy. The Federal Aviation Administration governs civil aviation to ensure safety in the skies. The Food and Drug Administration regulates “the safety, efficacy, and security of human and veterinary drugs, biological products, medical devices, our nation’s food supply, cosmetics, and products that emit radiation,” according to its website.

The need for these organizations is straightforward. Any technology or product that might put the public’s safety at risk should adhere to basic standards.

So, it seems not only logical, but critical, that an agency — preferably one with an international scope — be established with regulatory oversight over emerging artificial intelligence technology.

A number of prominent voices have sounded alarms recently, warning of the dangers AI can pose if its development is left unchecked.

Most recently, Geoffrey Hinton, an AI pioneer who in 2012, together with two of his students, built a neural network that allowed machines to teach themselves to recognize common objects, stepped down from Google and began warning about the technology’s dangers.

In an interview with the New York Times this week, he said one obvious danger is that people with bad intentions could hijack the technology. Already, so-called “deep fake” videos can make politicians and other influencers appear to say things they never said. Beyond that, the technology could be used to create lethal weapons or germs, thwart security systems and steal money electronically. 

The journal Nature recently reported on an experiment to see whether AI could be used to “design toxic molecules.” Researchers presenting at a conference convened by the Swiss Federal Institute for Nuclear, Biological and Chemical Protection were surprised to find that, “In less than 6 hours after starting on our in-house server, our model generated 40,000 molecules that scored within our desired threshold.”


Some of these new molecules, the researchers said, “were predicted to be more toxic … than publicly known chemical warfare agents.”

But Hinton spoke of a second, perhaps more worrisome possibility. If computers learn to generate their own code, and to run that code on their own, they might create super robots that could inflict harm or even kill.

This sounds like a bad science fiction movie plot, but plenty of scientists are warning about it with a straight face.

Writing in Time magazine, Eliezer Yudkowsky, a decision theorist who leads research at the Machine Intelligence Research Institute, said “the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die.”

The solution would be to program some sense of caring or empathy into the basic framework of AI, but, he said, no one knows how to do this. Absent that, computers that can think many times faster than humans could also devise ways to keep us from dismantling them.

Such warnings may sound like theoretical hyperventilating. Healthy skepticism is always warranted when it comes to doomsday scenarios. But the risks to public safety are not science fiction. Those risks are as pedestrian as airplanes, cars and the many other sometimes dangerous machines and technologies that merit oversight to ensure public safety.

As of this week, 27,565 people had signed an open letter calling for a six-month moratorium on “the training of AI systems more powerful than GPT-4.” Many prominent scientists, computer innovators and entrepreneurs have signed, including Apple co-founder Steve Wozniak, Cambridge Centre for the Study of Existential Risk executive director Sean O’Heigeartaigh, Berkeley computer science professor Stuart Russell and SpaceX founder Elon Musk, who also has publicly called for regulatory oversight.


The letter says, “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

That seems like a reasonable approach, although it would be hard to enforce. And six months may not be enough, given the risks outlined.

Humans have long wrestled to keep morality and ethics a step ahead of emerging technologies, and a regulatory agency may have difficulty keeping a lid on research. Unlike nuclear weapons production, AI research and development in far-away lands cannot be easily detected by satellite surveillance.

But it is important that the world draw clear lines and establish a regulatory body to enforce them. AI holds great promise, including potential cures for diseases, but the technology should also be handled with the utmost care. The stakes are too high for anything less.





