
As new AI tools arrive, state tech officials prepare restrained use


The surge of interest in large language models following the recent arrival of ChatGPT and similar platforms is spurring state decision-makers to prepare for a future in which AI plays a larger role in how government works. But they’re treading gingerly.

State chief information officers and other technology officials told StateScoop that lawmakers are calling on their offices for guidance, both to ensure that their states are taking advantage of AI’s power — and to establish guardrails that might prevent its misuse. And in many states, tech officials are itching to draft policies for an emerging technology that’s received billions of dollars in investment from the likes of Amazon, Google and Microsoft.

CIOs are moving fast, realizing that even without policies in place, many state government employees are already using generative AI in their daily work.

“Every day we’re applying it. We’re using it,” Nailor said of ChatGPT. “I don’t care if it’s the [chief information security officer] and he’s crafting talking points on something around cybersecurity because maybe he thinks technically and a large language model can give him talking points distilled down for his audience, to other uses. We use it quite often to turn technical speak, to help us inform how to better provide a specific audience with information.”

Other state CIOs shared similar anecdotes, though none copped to their staffs using generative AI to create documents the public might see or — more consequentially — to directly influence policies or processes. Massachusetts CIO Jason Snyder recently told Government Technology his office integrated limited functionality of ChatGPT into its website chatbots and productivity suite — but only for natural language and translation functions, not to generate text in direct response to public input.

Hawaii CIO Douglas Murdock warned that states should draft careful policies before AI wedges itself too deeply into their workflows.

“I think that’s one of the things we have to put in place before we allow widespread use of any of the AI,” Murdock told StateScoop. “What’s the approval process? And I think, very important, where is AI going to look for information? It has to be controlled so it doesn’t go places it shouldn’t go. Just like we control our own employees and we control the public, we have to control the AI from running around on the network.”

‘On their best day’

The prevalence of AI in government is hazy in part because technology companies are happy to label nearly anything as AI, including machine learning, deep learning, artificial neural networks and even algorithms that have nothing at all to do with AI.


But states are using AI. Modern cybersecurity tools deployed by many agencies rely on AI to monitor for network intrusions and detect malware. AI is also used to make forecasts in some corners of government. Vermont’s transportation department, for example, applies machine learning models to images of roadways to predict where it’ll have to patch the asphalt.
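Vermont hasn’t published the details of its pipeline, but the general pattern is familiar. Below is a minimal sketch of what image-based patching prediction could look like, assuming a convolutional classifier fine-tuned on labeled roadway photos; the model file, class labels and ranking step are illustrative assumptions, not the state’s actual system.

```python
# Minimal sketch: ranking roadway images by predicted need for patching.
# The checkpoint, labels and directory layout are hypothetical.
from pathlib import Path

import torch
from torchvision import models, transforms
from PIL import Image

# Hypothetical fine-tuned model: class 0 = "sound", class 1 = "needs patching".
model = models.resnet18(num_classes=2)
model.load_state_dict(torch.load("pavement_classifier.pt"))  # assumed checkpoint
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def patch_probability(image_path: Path) -> float:
    """Estimated probability that the imaged road segment needs patching."""
    batch = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logits = model(batch)
    return torch.softmax(logits, dim=1)[0, 1].item()

# Schedule crews worst-first by sorting imaged segments on predicted need.
segments = sorted(Path("roadway_images").glob("*.jpg"),
                  key=patch_probability, reverse=True)
```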

Vermont’s AI director, Josiah Raiche, told StateScoop this sort of “lite-AI” has been used for years. But impressions of AI are changing fast, as new techniques and technologies — like ChatGPT — become available. He said Vermont is testing at least two new uses for AI, including chatbot integration, a use case named by many other tech officials.

“What we’re trying to do is take the knowledge in our call center reps, people who’ve been there for 20 years and have helped thousands of people fill out this form, and turn that into some FAQs for each question,” he said, adding that he hopes not to replace those employees, but to relieve them of repetitive tasks.
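As a rough illustration of the idea Raiche describes, here is a sketch that distills call center transcripts into draft FAQ entries with the OpenAI Python SDK. The model name and prompt are assumptions, and in practice any drafts would still need human review before publication.

```python
# Minimal sketch: drafting FAQ entries from call center transcripts.
# Model choice and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_faq(transcripts: list[str], form_name: str) -> str:
    """Ask the model to summarize recurring questions into draft FAQ entries."""
    prompt = (
        f"These are call center transcripts about filling out {form_name}. "
        "Summarize the most common questions and answers as FAQ entries:\n\n"
        + "\n---\n".join(transcripts)
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    # Output is a draft only; staff review it before anything is published.
    return response.choices[0].message.content
```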

Though many officials said they’re excited about the potential of the new tech, they’re also wary of unleashing AI on the public without tight controls. Connecticut CIO Mark Raymond pointed to the recent case in which a Belgian man died by suicide after weeks of corresponding with a chatbot about climate change, saying it illustrates one of the big open questions about AI: Who will be held responsible when things go wrong?

Raymond said he believes large language models can help state employees work more efficiently and make it easier for the public to find things online, but noted that Connecticut, like Vermont, will offer the public only a curated set of responses pre-generated by AI models rather than a large language model free-for-all.

“My general take is that the technology has the promise to act like humans do on their best day, all the time, if we train it to do so,” Raymond said. “But we as humans have biases, and if we train it on the worst of us, we could potentially set it up to do some bad things.”

Other AI systems used by government, such as facial recognition, have been shown to commonly exhibit biases based on the data used to train them. Such systems developed by Western nations typically favor European face shapes, for example, while some developed in China have been shown to have poorer success rates with non-Asian faces.

Connecticut Gov. Ned Lamont cited some of those concerns earlier this month when he signed legislation putting stronger governance around the state’s current and future uses of AI, including mandating creation of an “AI bill of rights.”


Mixing worlds

Former Delaware CIO Jason Clarke told StateScoop that while large language models are promising for government, he’s most concerned about them being applied too broadly. If a chatbot has free access to resident data across state agencies, it might, for example, remind someone applying for a fishing license that he’s also behind on child-support payments. Not everyone will appreciate that, Clarke said.

“You start to mix those worlds, and from a government and a private sector, we want to achieve the same thing,” Clarke said. “We want to make the customer experience great, but we’re really playing for different stakes. I think it has to be very intentional and locked down to the service that the individual is leveraging. If you go out and beyond that scope, you start to create scenarios where you’re mixing worlds, we’re mixing information that the end-user themselves may not be accepting of.”

But the stringent governance needed to keep AI from misbehaving also undermines much of the technology’s power. Using a large language model to generate chatbot responses, for example, is powerful because it saws through the time-consuming and difficult task of predicting all the strange ways people will ask for help. Using a tool like ChatGPT to build a massive library of curated responses may be helpful for states, but it’s nowhere near as efficient as the scary proposition of letting people query the AI directly.
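The curated-library pattern Raymond and Raiche describe can be as simple as matching an incoming question against vetted answers instead of generating text live. Here is a minimal sketch using only Python’s standard library; the FAQ contents and matching threshold are illustrative, not any state’s actual system.

```python
# Minimal sketch of the "curated responses" pattern: user questions are
# matched against a human-reviewed FAQ library; nothing is generated live.
from difflib import SequenceMatcher

# Answers drafted with an LLM, then reviewed and approved by staff.
CURATED_FAQ = {
    "how do i renew my fishing license": "Renew online at the Fish & Wildlife portal...",
    "where do i get a copy of my birth certificate": "Request one from Vital Records...",
}

def answer(question: str, threshold: float = 0.6) -> str:
    """Return the closest vetted answer, or a safe fallback."""
    def similarity(faq_q: str) -> float:
        return SequenceMatcher(None, question.lower(), faq_q).ratio()
    best_q = max(CURATED_FAQ, key=similarity)
    if similarity(best_q) >= threshold:
        return CURATED_FAQ[best_q]
    return "I couldn't find an approved answer. Please contact an agent."

print(answer("How can I renew a fishing license?"))
```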

Clarke and other CIOs insisted that even AI kept on a short leash offers government great potential in the form of more efficient work and a more convenient experience for the public. But the cross-pollination of agency data and the unsolicited algorithmic suggestions that Clarke fears sit at the core of what many government technologists have in mind when they imagine a citizen-centric government experience. It’s a thin line between helpful and intrusive, or even misleading or dangerous.

“There’s the consent piece, but then there’s … it could easily be drawing information from materials that are way past due,” Clarke said. “And so that creates bad information. Bad information in, bad information out. That scenario can establish distrust, it can establish frustration, it can get people very negative about their interactions with government very quickly.”

‘Robot work’ vs. ‘people work’

Utah Chief Technology Officer Dave Fletcher said he’s considered the possibility that AI, governed improperly, might run amok on his state’s network. He said he’s also pondered the potential devastation of a technological singularity, a worldwide point of no return in which self-improving systems grow beyond human control. But he’s not that worried.


“AI is one of those inevitable technologies,” he said. “It’s like the internet in the 1990s. There were people who didn’t want to use the internet. There were agencies that said they didn’t want their employees using internet because it would just be a waste of time.”

Fletcher said the Utah AI Center for Excellence, a five-year-old group that works with agencies to set goals for the technology’s use in the state, is meeting with Amazon, Google and Microsoft about the potential of their technologies to help “do their jobs better, make better decisions, all kinds of things.”

Before ChatGPT’s emergence, Utah didn’t have a policy specific to AI, just “guidelines,” Fletcher said. He said he’s now on the third draft of a generative AI policy, which looks at a wide array of concerns, including privacy, security, training and legal policy.

“We don’t want agencies and employees to enter private data and information into a model that isn’t private or controlled where we can ensure the confidentiality of the data,” he said. “We’re looking at copyright guidelines from the U.S. Copyright Office. We’re ensuring contractors maintain transparency and disclose use of generative AI when producing works that are going to be owned by the state. And [we must] confirm proper licensure of model training data.”

In Vermont, officials are taking a “center for enablement” approach, drafting policy, but also offering agencies templates that make it easier for them to use AI ethically, said Raiche, the state’s AI director. Vermont’s AI ethics policy centers on preserving public trust, he said, and includes provisions like always labeling documents with a human owner to preserve accountability. Another provision requires notifying members of the public whenever AI plays a role in approving or denying an application, then offering recourse if the resident disagrees with the decision, he said.
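Provisions like those translate naturally into record-keeping. Below is a minimal sketch of what such an accountability record might look like; the field names and notice wording are assumptions for illustration, not Vermont’s actual schema.

```python
# Minimal sketch of the accountability provisions described above: every
# AI-assisted decision carries a named human owner, an AI-involvement flag
# and appeal instructions. All field names are hypothetical.
from dataclasses import dataclass

@dataclass
class DecisionRecord:
    application_id: str
    outcome: str              # e.g. "approved" or "denied"
    ai_assisted: bool         # disclosed to the applicant when True
    human_owner: str          # accountable staff member, always required
    appeal_instructions: str  # recourse if the resident disagrees

record = DecisionRecord(
    application_id="APP-2023-0417",
    outcome="denied",
    ai_assisted=True,
    human_owner="J. Smith, Benefits Division",
    appeal_instructions="Reply within 30 days to request human review.",
)

# Notify the resident whenever AI played a role in the decision.
if record.ai_assisted:
    print(f"Notice: AI assisted this decision. Owner: {record.human_owner}. "
          f"{record.appeal_instructions}")
```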

Officials are drafting the policies because AI is coming. And, Raiche said, despite all the new concerns, he believes it’s going to be a net benefit to both the public and government.

“I chunk up work into people work and robot work and if the work is not creative, it’s not really people work anymore — it’s robot work,” he said. “And we should free people up to do more of that creative work. We’re making work more human.”


