Microsoft has announced that its Azure OpenAI Service is now available to the US government in an even more secure format, with compliance promised for the regulatory standards governing classification and security.
With Azure OpenAI Service for government, customers (currently United States federal, state, and local government agencies and their partners) can access OpenAI’s large GPT models – the same models behind the popular ChatGPT chatbot.
Microsoft hopes that moving those models from the commercial cloud to a new and secure architecture will enable the government to benefit from the time-saving tech that’s stolen many of the headlines in recent months.
Azure OpenAI Service for government is now live
The service’s REST APIs give government customers access to OpenAI’s language and multimodal models, including GPT-3 and GPT-4, as well as its Embeddings models.
The company expects government agencies to leverage these existing models to develop their own AI-enabled applications to help improve efficiency across their operations.
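As a rough illustration of what building on those REST APIs can look like, below is a minimal sketch of a chat-completion call in Python. The endpoint, API key, deployment name, and API version shown here are placeholders rather than details from Microsoft’s announcement; the exact host and versions available in the government environment depend on how the Azure OpenAI resource is provisioned.

```python
import os
import requests

# Placeholder configuration; real values come from the provisioned Azure OpenAI resource.
ENDPOINT = os.environ["AZURE_OPENAI_ENDPOINT"]   # e.g. "https://<resource>.openai.azure.com"
API_KEY = os.environ["AZURE_OPENAI_API_KEY"]
DEPLOYMENT = "gpt-4"                             # name chosen when the model was deployed

url = f"{ENDPOINT}/openai/deployments/{DEPLOYMENT}/chat/completions"
payload = {
    "messages": [
        {"role": "system", "content": "You answer questions in plain language for agency staff."},
        {"role": "user", "content": "Explain what an embedding model is in two sentences."},
    ],
    "temperature": 0.2,
}

# Send the request and print the model's reply.
response = requests.post(
    url,
    params={"api-version": "2023-05-15"},
    headers={"api-key": API_KEY, "Content-Type": "application/json"},
    json=payload,
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

An agency application would wrap calls like this in its own services, adding the access controls and auditing its data-handling rules require.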
With Azure OpenAI Service for government comes the expected set of benefits and use cases, including content generation, summarization, semantic search, and code generation and correction.
In a bid to convince government bodies that its services are secure, Microsoft sheds some light on the 250,000 km of fiber-optic and undersea cable that makes up its global network backbone, promising that government data never leaves this network to travel over the public Internet.
In the announcement, Bill Chappell, Microsoft’s CTO for Strategic Missions and Technologies, explained: “Only the queries submitted to the Azure OpenAI Service transit into the Azure OpenAI model in the commercial environment through an encrypted network and do not remain in the commercial environment.”
Furthermore, Chappell confirmed that government data is not used for training OpenAI’s models in the same way that regular consumer information is used.
How the US government plans to use generative AI to its benefit remains to be seen, but its approach is likely to be cautious at first as it gets to grips with integrating artificial intelligence with the sensitive data it handles.