security

This is why Nvidia deserves its $1 trillion valuation – Fortune


Computex, the massive personal computing trade show, is taking place in Taipei this week, its first time as an in-person event since before the pandemic. And when it comes to A.I., the show provided further evidence of just how far ahead of the game Nvidia is as the leading producer of the computer chips that are powering the current A.I. revolution.

Jensen Huang, the company’s Taiwan-born CEO, bantered with the crowd in Taiwanese during parts of his keynote address to the conference and was clearly reveling in his status as a homegrown hero as his company closed in on a $1 trillion market valuation—a milestone it hit today, a day after Huang’s keynote, becoming the first chipmaker ever to reach that lofty height. Thanks to investor enthusiasm for the generative A.I. boom that is being built atop Nvidia’s graphics processing units, the company’s stock is up more than 180% year to date.

At the show, Huang announced that Nvidia’s Grace Hopper GH200 “superchips”—as the company terms them—are now in full production. These chips combine Nvidia’s highest-performing Hopper H100 graphics processing units, now the top-of-the-line chip for generative A.I. workloads, with its Grace CPU, or central processing unit, that can handle a more diverse set of computing tasks.

Huang revealed that the company had linked 256 of these GH200 chips together using the company’s own NVLink networking technology to create a supercomputer that can power applications requiring up to 144 terabytes of memory. The new supercomputer is designed for training ultra-large language models, complex recommendation algorithms, and graph neural networks that are used for some fraud detection and data analytics applications. Nvidia said the first customers for this supercomputer will be Microsoft, Google, and Meta.

Of course, Nvidia has recently branched out from hardware and begun offering its own fully-trained A.I. foundation models. At Computex, the company tried to demonstrate some of these A.I. capabilities in a demo geared towards Computex’s PC and video gaming crowd. It showed a video depicting how its A.I.-powered graphics rendering chips and large language models can be coupled to create non-player characters for a computer game that are more realistic and less scripted, than those that currently exist. Well, the visuals in the scene, which was set in a ramen shop in a kind of Tokyo underworld, were arrestingly cinematic. But the LLM-generated dialogue, as many commentators noted, seemed no less stilted than the canned dialogue that humans script for non-player characters in existing games. Clearly, Nvidia’s Nemo LLM may need some further fine-tuning.

As any student of power politics or game theory knows, hegemons tend to beget alliances aimed at countering their overwhelming strength. In the past month, news accounts reported that Microsoft was collaborating with AMD, Nvidia’s primary rival in the graphics rendering sphere, on a possible A.I.-specific chip that could make Microsoft less reliant on purchasing Nvidia’s GPUs. (Microsoft later said aspects of the report were wrong, but that it has long had efforts to see if it could develop its own computer chips.) George Hotz, a Silicon Valley hacker and merry prankster best known for jailbreaking iPhones and Playstations and who went on to build a self-driving car in his own garage, also announced he was starting a software company called Tiny Corp. dedicated to creating software that will make AMD’s GPUs competitive with Nvidia’s. If that effort is successful, Hotz declared he would turn to building his own silicon. “If we even have a 3% chance of dethroning NVIDIA and eating in to their 80% margins, we will be very very rich,” Hotz wrote on his blog. “If we succeed at this project, we will be on the cutting edge of non-NVIDIA AI compute.”

Readers Also Like:  Varonis Adds Generative AI Capabilities to Leading Data Security ... - MarTech Series

In his blog, Hotz notes that most A.I. chip startups that hoped to dethrone Nvidia have failed. Some, such as Cerebras and Graphcore are still trying, but both have struggled to gain as much traction as they had hoped. Microsoft tried using Graphcore in its data centers but then pivoted away from the U.K.-based startup’s chips. And Hotz is right about identifying one of Nvidia’s biggest advantages: It’s not Nvidia’s hardware, it’s its software. Cuda, the middleware layer that is used to implement A.I. applications on Nvidia’s chips, is not only effective, it is hugely popular and well-supported. An estimated 3 million developers use Cuda. That makes Nvidia’s chips, despite their expense, extremely sticky. Where many rivals have gone wrong is in trying to attack Nvidia on silicon alone, without investing in building a software architecture and developer ecosystem that could rival Cuda. Hotz is going after Cuda.

But there is more to Nvidia’s market dominance than just powerful silicon and Cuda. There’s also the way it can link GPUs together inside data centers. One of Huang’s greatest acquisitions was Israeli networking company Mellanox, which Nvidia bought for $6.9 billion in 2019. Mellanox has given Nvidia a serious leg up on competitors like AMD. Michael Kagan, Nvidia’s chief technology officer, who had also been CTO at Mellanox before the acquisition, recently told me that one of the ways Nvidia had wrung more efficiency out of its data center GPUs was to move some of the computing into the network equipment itself. (He likened it to a pizza shop that, in order to get more efficient, equipped its delivery drivers with portable ovens so the pizza would finish cooking as the delivery driver drove it to a customer’s house.) And Nvidia isn’t sitting still when it comes to networking either. At Computex, the company announced a new ethernet network called Spectrum X that it says can deliver 1.7 times better performance and energy efficiency for generative A.I. workloads.

Improvements like this will make Nvidia very hard to catch. Of course, Nvidia isn’t perfect. And we’ll have more on one area of the generative A.I. race where it may have stumbled in the Brainfood section below.

With that, here’s more of this week’s A.I. news.

Jeremy Kahn
@jeremyakahn
jeremy.kahn@fortune.com

A.I. IN THE NEWS

The open letter meets the open sentence. A group of A.I. luminaries, including Turing Award winners Geoff Hinton and Yoshua Bengio, OpenAI CEO Sam Altman and chief scientist Ilya Sutskever, all three of Google DeepMind’s cofounders Demis Hassabis, Shane Legg, and Mustafa Suleyman, Anthropic CEO Dario Amodei, and many more signed on to a simple statement calling for world leaders to take the threat of human extinction from A.I. as seriously as the risk of nuclear war and deadly pandemics.

The statement—really just a 22-word sentence—was published by the Center for AI Safety (CAIS), a nonprofit group dedicated to mitigating the existential risk from A.I. Many of the individuals who signed the statement had not signed on to an earlier open letter, circulated by another A.I. safety nonprofit called the Future of Life Institute, that had demanded a six-month pause in the development of A.I. systems more powerful than OpenAI’s GPT-4 while the industry and government developed a regulatory framework around the technology. The CAIS statement was even vaguer about exactly what policies should be put in place to address what it called the extinction-level threat of A.I., but Dan Hendrycks, the center’s director, told me that “the statement focuses on the fact that there’s a shared understanding of the risks. A.I. scientists, professors, and tech leaders agree that these risks are serious and ought to be a global priority. I hope that this inspires additional thought on policies that could actually reduce these risks.”

Readers Also Like:  Smart solar tech a national security risk: opposition - Yahoo News Australia

Except, there isn’t actually a shared understanding of the risks. Many A.I. ethicists actually see the danger of extinction as extremely remote and a distraction from real-world harms from A.I. that are here today. My Fortune colleague David Meyer has more on the open statement and the controversy here.

Waymo and Uber partner on self-driving cars in Phoenix. Former rivals in the self-driving car space, Alphabet’s Waymo and Uber, are now partnering to introduce an autonomous taxi service in Phoenix, Ariz. Waymo will supply its self-driving cars, while Uber will contribute its network of riders. This collaboration will allow Uber users to request Waymo cars for food delivery through Uber Eats or transportation via the Uber app, Waymo explained in a blog post.

The Financial Times says A.I. won’t be writing any of its stories. Ruala Khalaf, editor of the U.K.-based business newspaper, said in a letter the paper published last week that “FT journalism in the new AI age will continue to be reported and written by humans who are the best in their fields and who are dedicated to reporting on and analysing the world as it is, accurately and fairly.” But she also said the paper would begin experimenting with how generative A.I. can assist in other ways such as infographics, diagrams, and photos (although she said any A.I.-generated photos would always be labeled as such). She also said, “The team will also consider, always with human oversight, generative AI’s summarising abilities.”

Lawyer lands in hot water after using ChatGPT to write legal brief. A lawyer in New York, Steven A. Schwartz, admitted to using ChatGPT to generate a legal brief that contained fabricated court decisions and quotations. The lawyer cited numerous nonexistent cases while representing a client in a lawsuit against Avianca airline, the New York Times reported. Schwartz claimed he was unaware of the A.I. chatbot’s potential for providing false information and expressed regret, pledging not to use the program without thorough verification in the future. The incident raises ethical concerns and highlights the need for lawyers—and people in every profession—to carefully fact-check any A.I.-generated content. It is likely we will see more incidents like this, with potentially serious consequences for the individuals involved, as generative A.I. becomes increasingly ubiquitous.

EYE ON A.I. RESEARCH

Meta launches a massive open-source multilingual speech-to-text system. Meta’s A.I. research team has debuted an A.I. model that can perform real-time speech-to-speech translation for almost any language, including those primarily spoken but not commonly written. For instance, the model works for Hokkien, an unwritten Chinese language, making it the first A.I.-powered speech translation system for this language, Venture Beat reported. Meta has made the model freely available as an open-source project. But the universal translator also has some real-world applications to Meta’s own business. The breakthrough could enable anyone to speak to anyone in any language and be understood, which could be a key asset for a virtual environment like the metaverse where people from many different countries might mingle. You can check out the model itself here.

Readers Also Like:  Making Healthcare Device Asset Management Efficient with Security Tech - Healthcare IT Today

FORTUNE ON A.I.

Rihanna singing or an A.I.-generated fake? The music industry is threatened by the latest buzzy technology—by Jeremy Kahn

As crypto embraces A.I., a major exchange scraps ChatGPT integration because ‘it’s very dangerous’—by Leo Schwartz

ChatGPT could rocket Microsoft’s valuation another $300 billion after Nvidia’s massive gains, according to analyst Dan Ives—by Tristan Bove

Former Google safety boss sounds alarm over tech industry cuts and A.I. ‘hallucinations’: ‘Are we waiting for s**t to hit the fan?’—by Prarthana Prakash

BRAINFOOD

The security vulnerabilities of generative A.I. models are becoming a hot topic. With many enterprises starting to put generative A.I. models into production, people are becoming more conscious of security vulnerabilities. 

This month, a trio of companies building open-source foundation models (EleutherAI, Stability AI, and Hugging Face), collaborated on an audit of Hugging Face’s Safetensors—a method of securely storing data representations in the deep learning models. Safetensors was developed because PyTorch, one of the most popular programming languages for deep learning, stored these data representations in a way that allowed an attacker to disguise malicious code as an A.I. model and then execute that code. Safetensors prevents this from happening, and the audit found Safetensors itself had no serious security flaws. The collaboration shows that the open-source community, at least, is getting serious about trying to come together and fix security holes. But it also highlights that there are potentially major cybersecurity issues with using generative A.I.

A month ago, Nvidia, realizing that concerns about large language models spewing toxic or inappropriate content or leaking sensitive data were holding back enterprise adoption of LLMs, such as the company’s Nemo models, introduced something called Nemo Guardrails. This was supposed to make it simple for a company to set up safeguards around a Nemo LLM to prevent it from giving answers that the company using it didn’t want it to provide. (I wrote about the development in Eye on A.I. here.) But today security researchers at San Francisco startup Robust Intelligence published research showing that they could use a variety of attacks to get Nemo to override these guardrails. For instance, a guardrail meant to keep an LLM on a single topic could be jumped if the user prompted the system using a statement that contained some information about the authorized topic and some information about an unauthorized topic. Yaron Singer, Robust Intelligence’s CEO, told me that the fundamental problem is that Nemo Guardrails relies on other LLMs to police the primary LLM, and all LLMs are essentially vulnerable to these kind of prompt injection attacks. Singer says that fundamentally different methods—including hard-coded rules—are probably necessary to create more resilient guardrails. But he acknowledged that these could come at the expense of some of the flexibility that LLMs provide in terms of the range of answers they can give, which is one of their main selling points. “There are always tradeoffs between functionality and security,” he says. He said, however, that enterprises need to think very hard about the use case where they applying an LLM. “Does an e-commerce chatbot really need to be able to role-play with a user?” he asks. Good question.



READ SOURCE

This website uses cookies. By continuing to use this site, you accept our use of cookies.