
What Generative AI Reveals About the Limits of Technological …


Dr. Joe Bak-Coleman is an associate research scientist at the Craig Newmark Center for Journalism Ethics and Security at Columbia University and an RSM assembly fellow at the Berkman Klein Center’s Institute for Rebooting Social Media.

March 1940 meeting of scientists developing the atomic bomb in the Radiation Laboratory at Berkeley, California: Ernest O. Lawrence, Arthur H. Compton, Vannevar Bush, James B. Conant, Karl T. Compton, and Alfred L. Loomis. Wikimedia

Over the past month, generative AI has ignited a flurry of discussion about the implications of software that can generate everything from photorealistic images to academic papers and functioning code. In that time, mass adoption has begun in earnest, with generative AI integrated into products ranging from Photoshop and search engines to software development tools.

Microsoft’s Bing has integrated a large language model (LLM) into its search feature, complete with hallucinations of basic facts, oddly manipulative expressions of love, and the occasional “Heil Hitler.” Google’s Bard has fared similarly, getting textbook facts about planetary discovery wrong in its own demo. A viral Midjourney image of the pope in “immaculate drip” even befuddled experts and celebrities alike who, embracing their inner Fox Mulder, just wanted to believe.

Even in the wake of Silicon Valley Bank’s collapse and a slowdown in the tech industry, the funding, adoption, and embrace of these technologies appear to have occurred before their human counterparts could generate, much less agree on, a complete list of things to be concerned about. Academics have raised the alarm about plagiarism and the proliferation of fake journal articles. Software developers are concerned about the erosion of already-dwindling jobs. Ethicists worry about the moral standing and biases of these agents, and election officials fear supercharged misinformation.


Even if you believe most of these concerns are mere moral panics, or that the benefits outweigh the costs, that conclusion is premature: the lists of potential risks, costs, and benefits are growing by the hour.

In any other context, this is the point in the op-ed where the writer would normally wax poetic about the need for regulators to step in and put a pause on things while we sort it out. To do so would be hopelessly naïve. The Supreme Court is currently deciding whether a 24-year-old company can face liability for deaths that occurred eight years ago under the text of a 27-year-old law. It’s absurd to expect Deus ex Congressus.

The truth is, these technologies are going to become part of our daily lives—whether we like it or not. Some jobs will get easier; some will simply cease to exist. Marvelous and terrible things will happen, with effects that span the breadth of human experience. I have little doubt there will be a human toll, and almost certainly deaths: all it takes is a bit of hallucinated medical advice, anti-vaccine misinformation, biased decision-making, or new paths to radicalization. Even GPS claimed its share of lives; it’s absurd to think generative AI will fare any better. All we can do at the moment is hope it isn’t too bad and react when it is.

Yet the utterly ineffective chorus of concern from regulators, academics, technologists, and even teachers raises a broader question: where is the line on the adoption of new technologies?


With few exceptions, any sufficiently lucrative technology we’ve developed has found its way into daily life, mindlessly altering the world we live in without regard to human development, well-being, equity, or sustainability. In this sense, the only unique thing about generative AI is that it is capable of articulating the risks of its own adoption, to little effect.

So what does unadoptable technology look like? Self-driving cars are a rare case of gradual adoption, but that is due in part to the easier-to-litigate liability of a “full self-driving” car sending its owner into the rear bumper of a parked semi. When the connection between a technology and its harms is more indirect, it is difficult to conjure examples where we’ve exercised caution.

In this sense, the scariest thing about generative AI is that it has revealed our utter lack of guardrails against harmful technology, even when concerns span the breadth of human experience. Here, as always, our only choice is to wait until journalists and experts uncover harm, gather evidence too compelling to be undermined by PR firms and lobbyists, and convince polarized legislatures to enact sensible and effective legislation. Ideally, this happens before the technology becomes obsolete and is replaced with some fresh new hell.

The alternative to this cycle of adoption, harm, and delayed regulation is to collectively decide where we draw the line. It might make sense to start with the extreme, say, control of nuclear arms, and work our way from doomsday to everyday. Or we can simply take Google Bard’s answer:


“A technology that can cause serious harm and has not been adequately tested or evaluated should be paused indefinitely prior to adoption.”



