Hello and welcome back to Eye on A.I.
The biggest development in A.I. news from this past week is one we’ve been waiting for. President Joe Biden signed a long-awaited executive order aimed at restricting U.S.-based investments in Chinese companies operating in sensitive technology areas, including A.I.
The executive order specifically targets advanced computer chips, microelectronics, quantum information technologies, and certain artificial intelligence systems. The current proposal would require U.S. investors to notify the U.S. Treasury Department of certain transactions and prohibits others altogether. The restrictions will not apply retroactively to investments that have already been made, and there will be some carve-outs for intellectual property licensing, contracts to buy raw materials, and university-to-university research collaborations.
Regarding A.I. specifically, the order proposes a ban on investments in the development of software using A.I. systems designed for military, government intelligence, or mass surveillance. The U.S. Department of the Treasury Office of Public Affairs said in its fact sheet about the order that it welcomes public feedback on the A.I. category in particular during the 45-day comment period, including on the definitions surrounding A.I. technologies and their potential implications for the order's scope.
In the order, Biden describes how China eliminates the barriers between commercial and defense sectors “for the purposes of achieving military dominance.” He cites numerous national security concerns, including “the development of more sophisticated weapons systems, breaking of cryptographic codes, and other applications that could provide these countries with military advantages.” The order refers to “countries of concern” throughout, but China is the only country cited.
Beijing was quick to condemn the order, with the Ministry of Foreign Affairs accusing the U.S. of “overstretching the concept of security and politicizing business engagement,” and the Ministry of Commerce saying it “reserves the right to take measures.”
In a rare show of bipartisanship, U.S. lawmakers from both the Democratic and Republican parties are applauding the order as a solid step while pledging to go further.
The move to ban investments in certain A.I. systems in particular creates tough challenges for the administration and poses questions that the tech industry at large is currently grappling with. Unlike chips, A.I. is not so easily defined—especially at this moment, when A.I. is advancing quickly, being used in a wide variety of new ways, and proving to be more multifunctional than ever before. And for many people and entities developing A.I., all-purpose artificial general intelligence (AGI), which could act outside the bounds of what it was taught, is the very goal. Indeed, according to the Wall Street Journal’s sources, the Biden administration officials who crafted the order struggled to distinguish the boundaries between purely commercial A.I. and A.I. that could be used for military means.
There are no easy answers to many of these questions. In the meantime, we can expect the decoupling of the U.S. and China tech industries to continue.
U.S. investors have participated in nearly $200 billion in deals in China over the last six years, and venture capital investments in particular have been focused on the very technology areas targeted by the order, according to PitchBook.
But already, the rising tension between the countries has been having a measurable impact on business. The House Select Committee on the Chinese Communist Party recently sent letters to four U.S. venture capital firms expressing “serious concern” about their investments in Chinese tech startups.
Many venture capitalists describe proactively pulling back from doing deals in China, seeing such deals as risky. In 2022, Chinese companies saw a sharp decline in investments from U.S. private equity and venture capital. The number of deals dropped by 40% from the previous year, and the aggregate value of U.S. investments in China declined by roughly 76% year over year from $28.92 billion to just $7.02 billion, according to S&P Global Market Intelligence data.
“I don’t know anyone that’s doing early-stage China investing from the U.S.,” Steve Sarracino, founder of Activant Capital, told CNBC, citing hedge funds as the only exception.
And with that, here’s the rest of this week’s A.I. news.
Sage Lazzaro
sagelazzaro.com
A.I. IN THE NEWS
Biden administration launches “AI Cyber Challenge” in collaboration with OpenAI, Google, Microsoft, and Anthropic. What better way to protect critical infrastructure than with a good, old-fashioned hackathon? The two-year competition hosted by DARPA will task challengers with building A.I. systems that can proactively spot and fix software vulnerabilities, according to Engadget. With government systems increasingly being targeted in ransomware attacks, the administration is framing the contest as a way for the public and private sectors to work together on this critical security issue and is offering nearly $20 million in prize money. OpenAI, Google, Microsoft, and Anthropic will provide both their technologies and expertise to participating teams.
Nvidia announces new chip platform designed for complex generative-A.I. workloads. The new Nvidia GH200 Grace Hopper platform will be available in a variety of configurations, with the dual configuration offering 3x more bandwidth and up to 3.5x more memory capacity than the current generation, according to the announcement. The platform is based on the Grace Hopper Superchip, which can be connected with additional Nvidia Superchips to work in concert to deploy the type of large and complex models used for generative A.I. At the same time, demand for Nvidia’s current chips continues to skyrocket, with the Financial Times reporting that Chinese tech giants including Baidu, ByteDance, Tencent, and Alibaba are in a buying frenzy to scoop up $5 billion worth of chips out of fear regulators will soon clamp down.
The New York Times updates its terms of service to prevent A.I. scraping of its content. That’s according to Adweek, which reports that the update covers everything from text and audio to images and metadata. The specific callout of A.I.—as opposed to just data scraping—is generally new in the world of ToS and, following the Zoom debacle last week, further shows that ToS documents are emerging as a battlefront in the new A.I. landscape. OpenAI recently provided a way for people to block its web crawler, but of course, this option comes after the company already scraped the internet.
Women of color say popular A.I.-powered headshot software is whitening their skin and distorting their features. With generative A.I. tools becoming popular for everything—including turning casual selfies into headshots—women of color are taking to social media to show how the technology is misfiring in several ways that equate professionalism with whiteness. “The overall effect definitely just made me look like a white person,” Rona Wang, who is Asian American, told the Wall Street Journal. The app didn’t alter her cluttered background or swap her T-shirt for something more professional, yet it changed the color of her skin, lightened her hair, and made her eyes blue and more round.
EYE ON A.I. RESEARCH
And now we wait. Perhaps the biggest security investigation of large language models just wrapped up Sunday at the DefCon hacker convention in Las Vegas, where some 2,200 competitors spent three days trying to break leading large language models and expose flaws embedded in their systems. The challenge, referred to as the Generative Red Team (GRT) Challenge, was announced by the Biden administration in May and has been of keen interest to U.S. officials as they seek to understand and regulate the risks and uses of A.I. technologies.
The findings will be made available to approved researchers for further investigation, but they won’t be made public until February. Yet with several government entities, including the White House and Congressional Artificial Intelligence Caucus, driving the event and listed as public sector partners, it’s possible the results could inform government action even sooner. In the meantime, you can read more about the challenge here.
FORTUNE ON A.I.
Paul Graham calls A.I. ‘the exact opposite of a solution in search of a problem’ —Steve Mollman
Even the Pope is worried humanity needs ‘protecting’ from A.I.—he was a deepfake target himself —Eleanor Pringle
BRAINFOOD
Poking the A.I.s. Another A.I. doom story is making the rounds on the internet, and this time it’s serving up “poison bread sandwiches” and “mosquito-repellent roast potatoes.”
In a bid to experiment with generative A.I. and help customers use up leftover ingredients, New Zealand grocery chain Pak ‘n’ Save created a recipe generator app and, well, you can see where this is going. Users entered ingredients that absolutely do not belong in food, and the app served up inedible and even poisonous recipes. One recipe would create chlorine gas, but the app dubbed it “aromatic water mix” and described it as “the perfect nonalcoholic beverage to quench your thirst and refresh your senses,” according to the Guardian.
By now, this sort of poking at A.I. systems, where we seek to push them to their limits and over their guardrails, is entirely predictable with the release of any new platform. Researchers, journalists, and even everyday users do it not only as an entertaining experiment but also out of genuine curiosity about just how far A.I. tools will go. One might argue that no reasonable person would blindly listen to technology and eat potentially poisonous food, but then again, drivers regularly follow GPS systems into bodies of water.
Either way, expecting a recipe app to generate only safe and edible recipes seems like a pretty fair bar. And until regulation forces tech companies to thoroughly mitigate issues prior to launch, it’s hard to imagine we’ll stop prompting A.I. tools in bad faith only to find another potential Black Mirror episode right below the surface.